Test Report: KVM_Linux_crio 12230

4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0:2021-08-10:19925
Failed tests (5/263)

Order   Failed test                              Duration (s)
-----   -----------                              ------------
30      TestAddons/parallel/Ingress              246.64
152     TestMultiNode/serial/DeployApp2Nodes     191.06
153     TestMultiNode/serial/PingHostFrom2Pods   63.32
192     TestPreload                              172.03
238     TestNetworkPlugins/group/calico/Start    111.88
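The Ingress failure below repeats `ssh: Process exited with status 28` on every attempt to curl the ingress from inside the VM. Status 28 is the remote curl's exit code, which curl defines as `CURLE_OPERATION_TIMEDOUT`, so each request is timing out rather than being refused. A minimal sketch of the check and of the exit-code mapping (the profile name is copied from the log; the actual check only works against a live cluster):

```shell
# The check the test runs, verbatim from the log below (requires a cluster):
#   out/minikube-linux-amd64 -p addons-20210810221736-30291 \
#     ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
#
# The exit-code mapping can be demonstrated locally by capping curl at one
# second against an unroutable RFC 5737 documentation address:
curl -s --max-time 1 http://192.0.2.1/ >/dev/null
echo "curl exit code: $?"   # 28 means the operation timed out
```

Note that curl may instead return 7 ("couldn't connect") if the network actively rejects the packet; only 28 matches the timeouts recorded in this log.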
TestAddons/parallel/Ingress (246.64s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:158: (dbg) TestAddons/parallel/Ingress: waiting 12m0s for pods matching "app.kubernetes.io/name=ingress-nginx" in namespace "ingress-nginx" ...
helpers_test.go:340: "ingress-nginx-admission-create-8dhm5" [9f5e8f07-74d9-4709-aa4d-540600f28dd4] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:158: (dbg) TestAddons/parallel/Ingress: app.kubernetes.io/name=ingress-nginx healthy within 7.91026ms
addons_test.go:165: (dbg) Run:  kubectl --context addons-20210810221736-30291 replace --force -f testdata/nginx-ingv1.yaml

=== CONT  TestAddons/parallel/Ingress
addons_test.go:180: (dbg) Run:  kubectl --context addons-20210810221736-30291 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:180: (dbg) Done: kubectl --context addons-20210810221736-30291 replace --force -f testdata/nginx-pod-svc.yaml: (1.051581497s)
addons_test.go:185: (dbg) TestAddons/parallel/Ingress: waiting 4m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:340: "nginx" [01307f34-9c2c-44f7-b67a-b6252d68db87] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestAddons/parallel/Ingress
helpers_test.go:340: "nginx" [01307f34-9c2c-44f7-b67a-b6252d68db87] Running

=== CONT  TestAddons/parallel/Ingress
addons_test.go:185: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 19.02113531s
addons_test.go:204: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210810221736-30291 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"

=== CONT  TestAddons/parallel/Ingress
addons_test.go:204: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-20210810221736-30291 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (31.871620772s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:204: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210810221736-30291 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"

=== CONT  TestAddons/parallel/Ingress
addons_test.go:204: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-20210810221736-30291 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (31.998864731s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:204: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210810221736-30291 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:204: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-20210810221736-30291 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (31.943186886s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:224: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:165: (dbg) Run:  kubectl --context addons-20210810221736-30291 replace --force -f testdata/nginx-ingv1.yaml
addons_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210810221736-30291 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:242: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-20210810221736-30291 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (31.878828381s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210810221736-30291 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:242: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-20210810221736-30291 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (32.176919773s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210810221736-30291 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:242: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-20210810221736-30291 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (32.022682783s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:262: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:265: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210810221736-30291 addons disable ingress --alsologtostderr -v=1
addons_test.go:265: (dbg) Done: out/minikube-linux-amd64 -p addons-20210810221736-30291 addons disable ingress --alsologtostderr -v=1: (29.236958739s)
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-20210810221736-30291 -n addons-20210810221736-30291
helpers_test.go:245: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210810221736-30291 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p addons-20210810221736-30291 logs -n 25: (1.337454735s)
helpers_test.go:253: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------|------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                Args                |              Profile               |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------|------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | --all                              | download-only-20210810221716-30291 | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:17:35 UTC | Tue, 10 Aug 2021 22:17:36 UTC |
	| delete  | -p                                 | download-only-20210810221716-30291 | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:17:36 UTC | Tue, 10 Aug 2021 22:17:36 UTC |
	|         | download-only-20210810221716-30291 |                                    |         |         |                               |                               |
	| delete  | -p                                 | download-only-20210810221716-30291 | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:17:36 UTC | Tue, 10 Aug 2021 22:17:36 UTC |
	|         | download-only-20210810221716-30291 |                                    |         |         |                               |                               |
	| start   | -p addons-20210810221736-30291     | addons-20210810221736-30291        | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:17:36 UTC | Tue, 10 Aug 2021 22:20:44 UTC |
	|         | --wait=true --memory=4000          |                                    |         |         |                               |                               |
	|         | --alsologtostderr                  |                                    |         |         |                               |                               |
	|         | --addons=registry                  |                                    |         |         |                               |                               |
	|         | --addons=metrics-server            |                                    |         |         |                               |                               |
	|         | --addons=olm                       |                                    |         |         |                               |                               |
	|         | --addons=volumesnapshots           |                                    |         |         |                               |                               |
	|         | --addons=csi-hostpath-driver       |                                    |         |         |                               |                               |
	|         | --driver=kvm2                      |                                    |         |         |                               |                               |
	|         | --container-runtime=crio           |                                    |         |         |                               |                               |
	|         | --addons=ingress                   |                                    |         |         |                               |                               |
	|         | --addons=helm-tiller               |                                    |         |         |                               |                               |
	| -p      | addons-20210810221736-30291        | addons-20210810221736-30291        | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:20:57 UTC | Tue, 10 Aug 2021 22:21:13 UTC |
	|         | addons enable gcp-auth --force     |                                    |         |         |                               |                               |
	| -p      | addons-20210810221736-30291        | addons-20210810221736-30291        | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:21:25 UTC | Tue, 10 Aug 2021 22:21:25 UTC |
	|         | addons disable helm-tiller         |                                    |         |         |                               |                               |
	|         | --alsologtostderr -v=1             |                                    |         |         |                               |                               |
	| -p      | addons-20210810221736-30291 ip     | addons-20210810221736-30291        | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:21:38 UTC | Tue, 10 Aug 2021 22:21:38 UTC |
	| -p      | addons-20210810221736-30291        | addons-20210810221736-30291        | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:21:38 UTC | Tue, 10 Aug 2021 22:21:39 UTC |
	|         | addons disable registry            |                                    |         |         |                               |                               |
	|         | --alsologtostderr -v=1             |                                    |         |         |                               |                               |
	| -p      | addons-20210810221736-30291        | addons-20210810221736-30291        | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:21:44 UTC | Tue, 10 Aug 2021 22:21:45 UTC |
	|         | addons disable metrics-server      |                                    |         |         |                               |                               |
	|         | --alsologtostderr -v=1             |                                    |         |         |                               |                               |
	| -p      | addons-20210810221736-30291        | addons-20210810221736-30291        | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:22:30 UTC | Tue, 10 Aug 2021 22:22:37 UTC |
	|         | addons disable                     |                                    |         |         |                               |                               |
	|         | csi-hostpath-driver                |                                    |         |         |                               |                               |
	|         | --alsologtostderr -v=1             |                                    |         |         |                               |                               |
	| -p      | addons-20210810221736-30291        | addons-20210810221736-30291        | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:22:37 UTC | Tue, 10 Aug 2021 22:22:38 UTC |
	|         | addons disable volumesnapshots     |                                    |         |         |                               |                               |
	|         | --alsologtostderr -v=1             |                                    |         |         |                               |                               |
	| -p      | addons-20210810221736-30291        | addons-20210810221736-30291        | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:22:15 UTC | Tue, 10 Aug 2021 22:22:44 UTC |
	|         | addons disable gcp-auth            |                                    |         |         |                               |                               |
	|         | --alsologtostderr -v=1             |                                    |         |         |                               |                               |
	| -p      | addons-20210810221736-30291        | addons-20210810221736-30291        | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:25:21 UTC | Tue, 10 Aug 2021 22:25:50 UTC |
	|         | addons disable ingress             |                                    |         |         |                               |                               |
	|         | --alsologtostderr -v=1             |                                    |         |         |                               |                               |
	|---------|------------------------------------|------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/10 22:17:36
	Running on machine: debian-jenkins-agent-3
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0810 22:17:36.594877   30643 out.go:298] Setting OutFile to fd 1 ...
	I0810 22:17:36.594947   30643 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:17:36.594951   30643 out.go:311] Setting ErrFile to fd 2...
	I0810 22:17:36.594954   30643 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:17:36.595063   30643 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin
	I0810 22:17:36.595355   30643 out.go:305] Setting JSON to false
	I0810 22:17:36.630662   30643 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-3","uptime":7217,"bootTime":1628626640,"procs":158,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0810 22:17:36.630745   30643 start.go:121] virtualization: kvm guest
	I0810 22:17:36.633278   30643 out.go:177] * [addons-20210810221736-30291] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0810 22:17:36.634699   30643 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0810 22:17:36.633436   30643 notify.go:169] Checking for updates...
	I0810 22:17:36.636004   30643 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0810 22:17:36.637329   30643 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	I0810 22:17:36.638739   30643 out.go:177]   - MINIKUBE_LOCATION=12230
	I0810 22:17:36.638949   30643 driver.go:335] Setting default libvirt URI to qemu:///system
	I0810 22:17:36.668328   30643 out.go:177] * Using the kvm2 driver based on user configuration
	I0810 22:17:36.668356   30643 start.go:278] selected driver: kvm2
	I0810 22:17:36.668362   30643 start.go:751] validating driver "kvm2" against <nil>
	I0810 22:17:36.668378   30643 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0810 22:17:36.669417   30643 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:17:36.669567   30643 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0810 22:17:36.680324   30643 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0810 22:17:36.680397   30643 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0810 22:17:36.680536   30643 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0810 22:17:36.680560   30643 cni.go:93] Creating CNI manager for ""
	I0810 22:17:36.680567   30643 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0810 22:17:36.680575   30643 start_flags.go:272] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0810 22:17:36.680580   30643 start_flags.go:277] config:
	{Name:addons-20210810221736-30291 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210810221736-30291 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0810 22:17:36.680673   30643 iso.go:123] acquiring lock: {Name:mke8829815ca14456120fefc524d0a056bf82da0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:17:36.682531   30643 out.go:177] * Starting control plane node addons-20210810221736-30291 in cluster addons-20210810221736-30291
	I0810 22:17:36.682552   30643 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0810 22:17:36.682584   30643 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0810 22:17:36.682621   30643 cache.go:56] Caching tarball of preloaded images
	I0810 22:17:36.682707   30643 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0810 22:17:36.682725   30643 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on crio
	I0810 22:17:36.682986   30643 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/config.json ...
	I0810 22:17:36.683006   30643 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/config.json: {Name:mkd0472f72c7456c7878ce071de1bb6d2e073caa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:17:36.683129   30643 cache.go:205] Successfully downloaded all kic artifacts
	I0810 22:17:36.683153   30643 start.go:313] acquiring machines lock for addons-20210810221736-30291: {Name:mk9647f7c84b24381af0d3e731fd883065efc3b8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0810 22:17:36.683238   30643 start.go:317] acquired machines lock for "addons-20210810221736-30291" in 40.882µs
	I0810 22:17:36.683257   30643 start.go:89] Provisioning new machine with config: &{Name:addons-20210810221736-30291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12122/minikube-v1.22.0-1628238775-12122.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210810221736-30291 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0810 22:17:36.683342   30643 start.go:126] createHost starting for "" (driver="kvm2")
	I0810 22:17:36.685130   30643 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0810 22:17:36.685262   30643 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:17:36.685302   30643 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:17:36.695276   30643 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37067
	I0810 22:17:36.695707   30643 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:17:36.696314   30643 main.go:130] libmachine: Using API Version  1
	I0810 22:17:36.696338   30643 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:17:36.696733   30643 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:17:36.696920   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetMachineName
	I0810 22:17:36.697047   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .DriverName
	I0810 22:17:36.697196   30643 start.go:160] libmachine.API.Create for "addons-20210810221736-30291" (driver="kvm2")
	I0810 22:17:36.697226   30643 client.go:168] LocalClient.Create starting
	I0810 22:17:36.697276   30643 main.go:130] libmachine: Creating CA: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem
	I0810 22:17:36.987960   30643 main.go:130] libmachine: Creating client certificate: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem
	I0810 22:17:37.277845   30643 main.go:130] libmachine: Running pre-create checks...
	I0810 22:17:37.277878   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .PreCreateCheck
	I0810 22:17:37.278380   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetConfigRaw
	I0810 22:17:37.278911   30643 main.go:130] libmachine: Creating machine...
	I0810 22:17:37.278931   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .Create
	I0810 22:17:37.279118   30643 main.go:130] libmachine: (addons-20210810221736-30291) Creating KVM machine...
	I0810 22:17:37.282324   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | found existing default KVM network
	I0810 22:17:37.283561   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | I0810 22:17:37.283389   30667 network.go:240] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b4:06:d9}}
	I0810 22:17:37.284500   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | I0810 22:17:37.284434   30667 network.go:288] reserving subnet 192.168.50.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.50.0:0xc0001865d8] misses:0}
	I0810 22:17:37.284537   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | I0810 22:17:37.284471   30667 network.go:235] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0810 22:17:37.310901   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | trying to create private KVM network mk-addons-20210810221736-30291 192.168.50.0/24...
	I0810 22:17:37.548571   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | private KVM network mk-addons-20210810221736-30291 192.168.50.0/24 created
	I0810 22:17:37.548616   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | I0810 22:17:37.548523   30667 common.go:108] Making disk image using store path: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	I0810 22:17:37.548655   30643 main.go:130] libmachine: (addons-20210810221736-30291) Setting up store path in /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210810221736-30291 ...
	I0810 22:17:37.548754   30643 main.go:130] libmachine: (addons-20210810221736-30291) Building disk image from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/iso/minikube-v1.22.0-1628238775-12122.iso
	I0810 22:17:37.548810   30643 main.go:130] libmachine: (addons-20210810221736-30291) Downloading /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/iso/minikube-v1.22.0-1628238775-12122.iso...
	I0810 22:17:37.733439   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | I0810 22:17:37.733254   30667 common.go:115] Creating ssh key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210810221736-30291/id_rsa...
	I0810 22:17:37.804190   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | I0810 22:17:37.804027   30667 common.go:121] Creating raw disk image: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210810221736-30291/addons-20210810221736-30291.rawdisk...
	I0810 22:17:37.804222   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | Writing magic tar header
	I0810 22:17:37.804271   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | Writing SSH key tar header
	I0810 22:17:37.804318   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | I0810 22:17:37.804191   30667 common.go:135] Fixing permissions on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210810221736-30291 ...
	I0810 22:17:37.804344   30643 main.go:130] libmachine: (addons-20210810221736-30291) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210810221736-30291 (perms=drwx------)
	I0810 22:17:37.804388   30643 main.go:130] libmachine: (addons-20210810221736-30291) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines (perms=drwxr-xr-x)
	I0810 22:17:37.804420   30643 main.go:130] libmachine: (addons-20210810221736-30291) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube (perms=drwxr-xr-x)
	I0810 22:17:37.804445   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210810221736-30291
	I0810 22:17:37.804472   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines
	I0810 22:17:37.804489   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	I0810 22:17:37.804509   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0
	I0810 22:17:37.804526   30643 main.go:130] libmachine: (addons-20210810221736-30291) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0 (perms=drwxr-xr-x)
	I0810 22:17:37.804543   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0810 22:17:37.804561   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | Checking permissions on dir: /home/jenkins
	I0810 22:17:37.804574   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | Checking permissions on dir: /home
	I0810 22:17:37.804588   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | Skipping /home - not owner
	I0810 22:17:37.804608   30643 main.go:130] libmachine: (addons-20210810221736-30291) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxr-xr-x)
	I0810 22:17:37.804635   30643 main.go:130] libmachine: (addons-20210810221736-30291) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0810 22:17:37.804646   30643 main.go:130] libmachine: (addons-20210810221736-30291) Creating domain...
	I0810 22:17:37.829512   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined MAC address 52:54:00:66:39:7d in network default
	I0810 22:17:37.829962   30643 main.go:130] libmachine: (addons-20210810221736-30291) Ensuring networks are active...
	I0810 22:17:37.829985   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:17:37.831851   30643 main.go:130] libmachine: (addons-20210810221736-30291) Ensuring network default is active
	I0810 22:17:37.832195   30643 main.go:130] libmachine: (addons-20210810221736-30291) Ensuring network mk-addons-20210810221736-30291 is active
	I0810 22:17:37.832693   30643 main.go:130] libmachine: (addons-20210810221736-30291) Getting domain xml...
	I0810 22:17:37.834423   30643 main.go:130] libmachine: (addons-20210810221736-30291) Creating domain...
	I0810 22:17:38.226668   30643 main.go:130] libmachine: (addons-20210810221736-30291) Waiting to get IP...
	I0810 22:17:38.227474   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:17:38.227951   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | unable to find current IP address of domain addons-20210810221736-30291 in network mk-addons-20210810221736-30291
	I0810 22:17:38.227987   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | I0810 22:17:38.227911   30667 retry.go:31] will retry after 263.082536ms: waiting for machine to come up
	I0810 22:17:38.492437   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:17:38.492979   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | unable to find current IP address of domain addons-20210810221736-30291 in network mk-addons-20210810221736-30291
	I0810 22:17:38.493016   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | I0810 22:17:38.492916   30667 retry.go:31] will retry after 381.329545ms: waiting for machine to come up
	I0810 22:17:38.875375   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:17:38.875863   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | unable to find current IP address of domain addons-20210810221736-30291 in network mk-addons-20210810221736-30291
	I0810 22:17:38.875883   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | I0810 22:17:38.875806   30667 retry.go:31] will retry after 422.765636ms: waiting for machine to come up
	I0810 22:17:39.300456   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:17:39.301026   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | unable to find current IP address of domain addons-20210810221736-30291 in network mk-addons-20210810221736-30291
	I0810 22:17:39.301053   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | I0810 22:17:39.300976   30667 retry.go:31] will retry after 473.074753ms: waiting for machine to come up
	I0810 22:17:39.775466   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:17:39.775874   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | unable to find current IP address of domain addons-20210810221736-30291 in network mk-addons-20210810221736-30291
	I0810 22:17:39.775903   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | I0810 22:17:39.775825   30667 retry.go:31] will retry after 587.352751ms: waiting for machine to come up
	I0810 22:17:40.364604   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:17:40.365008   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | unable to find current IP address of domain addons-20210810221736-30291 in network mk-addons-20210810221736-30291
	I0810 22:17:40.365040   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | I0810 22:17:40.364965   30667 retry.go:31] will retry after 834.206799ms: waiting for machine to come up
	I0810 22:17:41.200970   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:17:41.201436   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | unable to find current IP address of domain addons-20210810221736-30291 in network mk-addons-20210810221736-30291
	I0810 22:17:41.201481   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | I0810 22:17:41.201380   30667 retry.go:31] will retry after 746.553905ms: waiting for machine to come up
	I0810 22:17:41.949793   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:17:41.950234   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | unable to find current IP address of domain addons-20210810221736-30291 in network mk-addons-20210810221736-30291
	I0810 22:17:41.950267   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | I0810 22:17:41.950222   30667 retry.go:31] will retry after 987.362415ms: waiting for machine to come up
	I0810 22:17:42.939217   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:17:42.939666   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | unable to find current IP address of domain addons-20210810221736-30291 in network mk-addons-20210810221736-30291
	I0810 22:17:42.939695   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | I0810 22:17:42.939607   30667 retry.go:31] will retry after 1.189835008s: waiting for machine to come up
	I0810 22:17:44.130782   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:17:44.131242   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | unable to find current IP address of domain addons-20210810221736-30291 in network mk-addons-20210810221736-30291
	I0810 22:17:44.131270   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | I0810 22:17:44.131203   30667 retry.go:31] will retry after 1.677229867s: waiting for machine to come up
	I0810 22:17:45.811066   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:17:45.811444   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | unable to find current IP address of domain addons-20210810221736-30291 in network mk-addons-20210810221736-30291
	I0810 22:17:45.811477   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | I0810 22:17:45.811382   30667 retry.go:31] will retry after 2.346016261s: waiting for machine to come up
	I0810 22:17:48.159480   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:17:48.159947   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | unable to find current IP address of domain addons-20210810221736-30291 in network mk-addons-20210810221736-30291
	I0810 22:17:48.159974   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | I0810 22:17:48.159910   30667 retry.go:31] will retry after 3.36678925s: waiting for machine to come up
	I0810 22:17:51.527820   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:17:51.528267   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | unable to find current IP address of domain addons-20210810221736-30291 in network mk-addons-20210810221736-30291
	I0810 22:17:51.528298   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | I0810 22:17:51.528179   30667 retry.go:31] will retry after 3.11822781s: waiting for machine to come up
	I0810 22:17:54.649827   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:17:54.650388   30643 main.go:130] libmachine: (addons-20210810221736-30291) Found IP for machine: 192.168.50.30
	I0810 22:17:54.650414   30643 main.go:130] libmachine: (addons-20210810221736-30291) Reserving static IP address...
	I0810 22:17:54.650431   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has current primary IP address 192.168.50.30 and MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:17:54.650737   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | unable to find host DHCP lease matching {name: "addons-20210810221736-30291", mac: "52:54:00:f8:a1:0c", ip: "192.168.50.30"} in network mk-addons-20210810221736-30291
	I0810 22:17:54.697986   30643 main.go:130] libmachine: (addons-20210810221736-30291) Reserved static IP address: 192.168.50.30
	I0810 22:17:54.698022   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | Getting to WaitForSSH function...
	I0810 22:17:54.698030   30643 main.go:130] libmachine: (addons-20210810221736-30291) Waiting for SSH to be available...
	I0810 22:17:54.703282   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:17:54.703633   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:a1:0c", ip: ""} in network mk-addons-20210810221736-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:17:51 +0000 UTC Type:0 Mac:52:54:00:f8:a1:0c Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f8:a1:0c}
	I0810 22:17:54.703670   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined IP address 192.168.50.30 and MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:17:54.703808   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | Using SSH client type: external
	I0810 22:17:54.703833   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210810221736-30291/id_rsa (-rw-------)
	I0810 22:17:54.703873   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.30 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210810221736-30291/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0810 22:17:54.703891   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | About to run SSH command:
	I0810 22:17:54.703913   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | exit 0
	I0810 22:17:54.828140   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | SSH cmd err, output: <nil>: 
	I0810 22:17:54.828614   30643 main.go:130] libmachine: (addons-20210810221736-30291) KVM machine creation complete!
	I0810 22:17:54.828761   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetConfigRaw
	I0810 22:17:54.829466   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .DriverName
	I0810 22:17:54.829679   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .DriverName
	I0810 22:17:54.829867   30643 main.go:130] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0810 22:17:54.829886   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetState
	I0810 22:17:54.832475   30643 main.go:130] libmachine: Detecting operating system of created instance...
	I0810 22:17:54.832491   30643 main.go:130] libmachine: Waiting for SSH to be available...
	I0810 22:17:54.832500   30643 main.go:130] libmachine: Getting to WaitForSSH function...
	I0810 22:17:54.832506   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHHostname
	I0810 22:17:54.838162   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:17:54.838479   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:a1:0c", ip: ""} in network mk-addons-20210810221736-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:17:51 +0000 UTC Type:0 Mac:52:54:00:f8:a1:0c Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:addons-20210810221736-30291 Clientid:01:52:54:00:f8:a1:0c}
	I0810 22:17:54.838511   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined IP address 192.168.50.30 and MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:17:54.838623   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHPort
	I0810 22:17:54.838770   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHKeyPath
	I0810 22:17:54.838904   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHKeyPath
	I0810 22:17:54.839027   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHUsername
	I0810 22:17:54.839200   30643 main.go:130] libmachine: Using SSH client type: native
	I0810 22:17:54.839439   30643 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.30 22 <nil> <nil>}
	I0810 22:17:54.839455   30643 main.go:130] libmachine: About to run SSH command:
	exit 0
	I0810 22:17:54.947297   30643 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0810 22:17:54.947327   30643 main.go:130] libmachine: Detecting the provisioner...
	I0810 22:17:54.947338   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHHostname
	I0810 22:17:54.952343   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:17:54.952669   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:a1:0c", ip: ""} in network mk-addons-20210810221736-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:17:51 +0000 UTC Type:0 Mac:52:54:00:f8:a1:0c Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:addons-20210810221736-30291 Clientid:01:52:54:00:f8:a1:0c}
	I0810 22:17:54.952702   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined IP address 192.168.50.30 and MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:17:54.952831   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHPort
	I0810 22:17:54.952991   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHKeyPath
	I0810 22:17:54.953106   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHKeyPath
	I0810 22:17:54.953217   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHUsername
	I0810 22:17:54.953331   30643 main.go:130] libmachine: Using SSH client type: native
	I0810 22:17:54.953515   30643 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.30 22 <nil> <nil>}
	I0810 22:17:54.953529   30643 main.go:130] libmachine: About to run SSH command:
	cat /etc/os-release
	I0810 22:17:55.064703   30643 main.go:130] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2020.02.12
	ID=buildroot
	VERSION_ID=2020.02.12
	PRETTY_NAME="Buildroot 2020.02.12"
	
	I0810 22:17:55.064799   30643 main.go:130] libmachine: found compatible host: buildroot
	I0810 22:17:55.064816   30643 main.go:130] libmachine: Provisioning with buildroot...
	I0810 22:17:55.064828   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetMachineName
	I0810 22:17:55.065089   30643 buildroot.go:166] provisioning hostname "addons-20210810221736-30291"
	I0810 22:17:55.065121   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetMachineName
	I0810 22:17:55.065297   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHHostname
	I0810 22:17:55.070807   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:17:55.071142   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:a1:0c", ip: ""} in network mk-addons-20210810221736-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:17:51 +0000 UTC Type:0 Mac:52:54:00:f8:a1:0c Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:addons-20210810221736-30291 Clientid:01:52:54:00:f8:a1:0c}
	I0810 22:17:55.071180   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined IP address 192.168.50.30 and MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:17:55.071317   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHPort
	I0810 22:17:55.071516   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHKeyPath
	I0810 22:17:55.071678   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHKeyPath
	I0810 22:17:55.071786   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHUsername
	I0810 22:17:55.071925   30643 main.go:130] libmachine: Using SSH client type: native
	I0810 22:17:55.072135   30643 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.30 22 <nil> <nil>}
	I0810 22:17:55.072158   30643 main.go:130] libmachine: About to run SSH command:
	sudo hostname addons-20210810221736-30291 && echo "addons-20210810221736-30291" | sudo tee /etc/hostname
	I0810 22:17:55.187477   30643 main.go:130] libmachine: SSH cmd err, output: <nil>: addons-20210810221736-30291
	
	I0810 22:17:55.187511   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHHostname
	I0810 22:17:55.192889   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:17:55.193323   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:a1:0c", ip: ""} in network mk-addons-20210810221736-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:17:51 +0000 UTC Type:0 Mac:52:54:00:f8:a1:0c Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:addons-20210810221736-30291 Clientid:01:52:54:00:f8:a1:0c}
	I0810 22:17:55.193360   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined IP address 192.168.50.30 and MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:17:55.193545   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHPort
	I0810 22:17:55.193716   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHKeyPath
	I0810 22:17:55.193912   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHKeyPath
	I0810 22:17:55.194056   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHUsername
	I0810 22:17:55.194232   30643 main.go:130] libmachine: Using SSH client type: native
	I0810 22:17:55.194416   30643 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.30 22 <nil> <nil>}
	I0810 22:17:55.194445   30643 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-20210810221736-30291' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-20210810221736-30291/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-20210810221736-30291' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0810 22:17:55.311668   30643 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0810 22:17:55.311701   30643 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube}
	I0810 22:17:55.311732   30643 buildroot.go:174] setting up certificates
	I0810 22:17:55.311756   30643 provision.go:83] configureAuth start
	I0810 22:17:55.311767   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetMachineName
	I0810 22:17:55.312092   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetIP
	I0810 22:17:55.317843   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:17:55.318163   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:a1:0c", ip: ""} in network mk-addons-20210810221736-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:17:51 +0000 UTC Type:0 Mac:52:54:00:f8:a1:0c Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:addons-20210810221736-30291 Clientid:01:52:54:00:f8:a1:0c}
	I0810 22:17:55.318196   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined IP address 192.168.50.30 and MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:17:55.318341   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHHostname
	I0810 22:17:55.322859   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:17:55.323136   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:a1:0c", ip: ""} in network mk-addons-20210810221736-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:17:51 +0000 UTC Type:0 Mac:52:54:00:f8:a1:0c Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:addons-20210810221736-30291 Clientid:01:52:54:00:f8:a1:0c}
	I0810 22:17:55.323162   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined IP address 192.168.50.30 and MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:17:55.323302   30643 provision.go:137] copyHostCerts
	I0810 22:17:55.323378   30643 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem (1123 bytes)
	I0810 22:17:55.323483   30643 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem (1679 bytes)
	I0810 22:17:55.323539   30643 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem (1082 bytes)
	I0810 22:17:55.323579   30643 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem org=jenkins.addons-20210810221736-30291 san=[192.168.50.30 192.168.50.30 localhost 127.0.0.1 minikube addons-20210810221736-30291]
	I0810 22:17:55.400407   30643 provision.go:171] copyRemoteCerts
	I0810 22:17:55.400469   30643 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0810 22:17:55.400498   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHHostname
	I0810 22:17:55.405413   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:17:55.405686   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:a1:0c", ip: ""} in network mk-addons-20210810221736-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:17:51 +0000 UTC Type:0 Mac:52:54:00:f8:a1:0c Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:addons-20210810221736-30291 Clientid:01:52:54:00:f8:a1:0c}
	I0810 22:17:55.405716   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined IP address 192.168.50.30 and MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:17:55.405892   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHPort
	I0810 22:17:55.406103   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHKeyPath
	I0810 22:17:55.406222   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHUsername
	I0810 22:17:55.406352   30643 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210810221736-30291/id_rsa Username:docker}
	I0810 22:17:55.487194   30643 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0810 22:17:55.504282   30643 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0810 22:17:55.520672   30643 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0810 22:17:55.536575   30643 provision.go:86] duration metric: configureAuth took 224.802296ms
	I0810 22:17:55.536605   30643 buildroot.go:189] setting minikube options for container-runtime
	I0810 22:17:55.536898   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHHostname
	I0810 22:17:55.542089   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:17:55.542510   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:a1:0c", ip: ""} in network mk-addons-20210810221736-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:17:51 +0000 UTC Type:0 Mac:52:54:00:f8:a1:0c Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:addons-20210810221736-30291 Clientid:01:52:54:00:f8:a1:0c}
	I0810 22:17:55.542550   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined IP address 192.168.50.30 and MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:17:55.542712   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHPort
	I0810 22:17:55.542942   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHKeyPath
	I0810 22:17:55.543095   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHKeyPath
	I0810 22:17:55.543261   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHUsername
	I0810 22:17:55.543402   30643 main.go:130] libmachine: Using SSH client type: native
	I0810 22:17:55.543536   30643 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.30 22 <nil> <nil>}
	I0810 22:17:55.543550   30643 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0810 22:17:55.906001   30643 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0810 22:17:55.906058   30643 main.go:130] libmachine: Checking connection to Docker...
	I0810 22:17:55.906068   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetURL
	I0810 22:17:55.908731   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | Using libvirt version 3000000
	I0810 22:17:55.913002   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:17:55.913290   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:a1:0c", ip: ""} in network mk-addons-20210810221736-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:17:51 +0000 UTC Type:0 Mac:52:54:00:f8:a1:0c Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:addons-20210810221736-30291 Clientid:01:52:54:00:f8:a1:0c}
	I0810 22:17:55.913338   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined IP address 192.168.50.30 and MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:17:55.913677   30643 main.go:130] libmachine: Docker is up and running!
	I0810 22:17:55.913696   30643 main.go:130] libmachine: Reticulating splines...
	I0810 22:17:55.913705   30643 client.go:171] LocalClient.Create took 19.216465923s
	I0810 22:17:55.913737   30643 start.go:168] duration metric: libmachine.API.Create for "addons-20210810221736-30291" took 19.216540211s
	I0810 22:17:55.913768   30643 start.go:267] post-start starting for "addons-20210810221736-30291" (driver="kvm2")
	I0810 22:17:55.913777   30643 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0810 22:17:55.913800   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .DriverName
	I0810 22:17:55.914065   30643 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0810 22:17:55.914094   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHHostname
	I0810 22:17:55.918269   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:17:55.918547   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:a1:0c", ip: ""} in network mk-addons-20210810221736-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:17:51 +0000 UTC Type:0 Mac:52:54:00:f8:a1:0c Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:addons-20210810221736-30291 Clientid:01:52:54:00:f8:a1:0c}
	I0810 22:17:55.918574   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined IP address 192.168.50.30 and MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:17:55.918711   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHPort
	I0810 22:17:55.918888   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHKeyPath
	I0810 22:17:55.919022   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHUsername
	I0810 22:17:55.919159   30643 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210810221736-30291/id_rsa Username:docker}
	I0810 22:17:55.998806   30643 ssh_runner.go:149] Run: cat /etc/os-release
	I0810 22:17:56.003290   30643 info.go:137] Remote host: Buildroot 2020.02.12
	I0810 22:17:56.003312   30643 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/addons for local assets ...
	I0810 22:17:56.003370   30643 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files for local assets ...
	I0810 22:17:56.003412   30643 start.go:270] post-start completed in 89.627257ms
	I0810 22:17:56.003453   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetConfigRaw
	I0810 22:17:56.004060   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetIP
	I0810 22:17:56.008721   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:17:56.008982   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:a1:0c", ip: ""} in network mk-addons-20210810221736-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:17:51 +0000 UTC Type:0 Mac:52:54:00:f8:a1:0c Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:addons-20210810221736-30291 Clientid:01:52:54:00:f8:a1:0c}
	I0810 22:17:56.009013   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined IP address 192.168.50.30 and MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:17:56.009224   30643 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/config.json ...
	I0810 22:17:56.009380   30643 start.go:129] duration metric: createHost completed in 19.326028741s
	I0810 22:17:56.009393   30643 start.go:80] releasing machines lock for "addons-20210810221736-30291", held for 19.326145214s
	I0810 22:17:56.009425   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .DriverName
	I0810 22:17:56.009604   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetIP
	I0810 22:17:56.013638   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:17:56.013921   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:a1:0c", ip: ""} in network mk-addons-20210810221736-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:17:51 +0000 UTC Type:0 Mac:52:54:00:f8:a1:0c Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:addons-20210810221736-30291 Clientid:01:52:54:00:f8:a1:0c}
	I0810 22:17:56.013955   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined IP address 192.168.50.30 and MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:17:56.014027   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .DriverName
	I0810 22:17:56.014158   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .DriverName
	I0810 22:17:56.014565   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .DriverName
	I0810 22:17:56.014791   30643 ssh_runner.go:149] Run: systemctl --version
	I0810 22:17:56.015118   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHHostname
	I0810 22:17:56.015254   30643 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0810 22:17:56.015290   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHHostname
	I0810 22:17:56.021912   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:17:56.022259   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:a1:0c", ip: ""} in network mk-addons-20210810221736-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:17:51 +0000 UTC Type:0 Mac:52:54:00:f8:a1:0c Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:addons-20210810221736-30291 Clientid:01:52:54:00:f8:a1:0c}
	I0810 22:17:56.022313   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined IP address 192.168.50.30 and MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:17:56.022430   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHPort
	I0810 22:17:56.022595   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHKeyPath
	I0810 22:17:56.022762   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHUsername
	I0810 22:17:56.022867   30643 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210810221736-30291/id_rsa Username:docker}
	I0810 22:17:56.023059   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:17:56.023332   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:a1:0c", ip: ""} in network mk-addons-20210810221736-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:17:51 +0000 UTC Type:0 Mac:52:54:00:f8:a1:0c Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:addons-20210810221736-30291 Clientid:01:52:54:00:f8:a1:0c}
	I0810 22:17:56.023353   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined IP address 192.168.50.30 and MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:17:56.023476   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHPort
	I0810 22:17:56.023612   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHKeyPath
	I0810 22:17:56.023778   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHUsername
	I0810 22:17:56.023915   30643 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210810221736-30291/id_rsa Username:docker}
	I0810 22:17:56.112969   30643 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0810 22:17:56.113102   30643 ssh_runner.go:149] Run: sudo crictl images --output json
	I0810 22:18:00.128766   30643 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.015609013s)
	I0810 22:18:00.128879   30643 crio.go:420] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.21.3". assuming images are not preloaded.
	I0810 22:18:00.128929   30643 ssh_runner.go:149] Run: which lz4
	I0810 22:18:00.133401   30643 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0810 22:18:00.138361   30643 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0810 22:18:00.138398   30643 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (576184326 bytes)
	I0810 22:18:03.615347   30643 crio.go:362] Took 3.481993 seconds to copy over tarball
	I0810 22:18:03.615435   30643 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0810 22:18:08.965973   30643 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (5.35050296s)
	I0810 22:18:10.445486   30643 crio.go:369] Took 6.830097 seconds to extract the tarball
	I0810 22:18:10.445508   30643 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0810 22:18:10.484499   30643 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0810 22:18:10.497173   30643 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0810 22:18:10.506868   30643 docker.go:153] disabling docker service ...
	I0810 22:18:10.506931   30643 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0810 22:18:10.517253   30643 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0810 22:18:10.525859   30643 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0810 22:18:10.663903   30643 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0810 22:18:10.799453   30643 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0810 22:18:10.810503   30643 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0810 22:18:10.826264   30643 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0810 22:18:10.834503   30643 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0810 22:18:10.841754   30643 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0810 22:18:10.841816   30643 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0810 22:18:10.857791   30643 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0810 22:18:10.865269   30643 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0810 22:18:10.998855   30643 ssh_runner.go:149] Run: sudo systemctl start crio
	I0810 22:18:11.263620   30643 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0810 22:18:11.263691   30643 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0810 22:18:11.268445   30643 start.go:417] Will wait 60s for crictl version
	I0810 22:18:11.268503   30643 ssh_runner.go:149] Run: sudo crictl version
	I0810 22:18:11.302354   30643 start.go:426] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.2
	RuntimeApiVersion:  v1alpha1
	I0810 22:18:11.302467   30643 ssh_runner.go:149] Run: crio --version
	I0810 22:18:11.504217   30643 ssh_runner.go:149] Run: crio --version
	I0810 22:18:12.036230   30643 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.2 ...
	I0810 22:18:12.036288   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetIP
	I0810 22:18:12.041885   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:18:12.042207   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:a1:0c", ip: ""} in network mk-addons-20210810221736-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:17:51 +0000 UTC Type:0 Mac:52:54:00:f8:a1:0c Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:addons-20210810221736-30291 Clientid:01:52:54:00:f8:a1:0c}
	I0810 22:18:12.042263   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined IP address 192.168.50.30 and MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:18:12.042427   30643 ssh_runner.go:149] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0810 22:18:12.047217   30643 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0810 22:18:12.057475   30643 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0810 22:18:12.057526   30643 ssh_runner.go:149] Run: sudo crictl images --output json
	I0810 22:18:12.133192   30643 crio.go:424] all images are preloaded for cri-o runtime.
	I0810 22:18:12.133224   30643 crio.go:333] Images already preloaded, skipping extraction
	I0810 22:18:12.133281   30643 ssh_runner.go:149] Run: sudo crictl images --output json
	I0810 22:18:12.166705   30643 crio.go:424] all images are preloaded for cri-o runtime.
	I0810 22:18:12.166741   30643 cache_images.go:74] Images are preloaded, skipping loading
	I0810 22:18:12.166810   30643 ssh_runner.go:149] Run: crio config
	I0810 22:18:12.250997   30643 cni.go:93] Creating CNI manager for ""
	I0810 22:18:12.251023   30643 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0810 22:18:12.251035   30643 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0810 22:18:12.251053   30643 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.30 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-20210810221736-30291 NodeName:addons-20210810221736-30291 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.30"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.50.30 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0810 22:18:12.251228   30643 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.30
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "addons-20210810221736-30291"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.30
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.30"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0810 22:18:12.251315   30643 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --hostname-override=addons-20210810221736-30291 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.30 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:addons-20210810221736-30291 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0810 22:18:12.251365   30643 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0810 22:18:12.259074   30643 binaries.go:44] Found k8s binaries, skipping transfer
	I0810 22:18:12.259158   30643 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0810 22:18:12.266041   30643 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (507 bytes)
	I0810 22:18:12.277444   30643 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0810 22:18:12.288958   30643 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I0810 22:18:12.300867   30643 ssh_runner.go:149] Run: grep 192.168.50.30	control-plane.minikube.internal$ /etc/hosts
	I0810 22:18:12.304799   30643 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.30	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0810 22:18:12.314481   30643 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291 for IP: 192.168.50.30
	I0810 22:18:12.314537   30643 certs.go:183] generating minikubeCA CA: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key
	I0810 22:18:12.502194   30643 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt ...
	I0810 22:18:12.502231   30643 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt: {Name:mk8b2ff66e896a02b6230ac0845385b56142d34d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:18:12.502459   30643 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key ...
	I0810 22:18:12.502477   30643 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key: {Name:mkb1ebddd8e111cf4eb543bd5426c36a5e453147 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:18:12.502558   30643 certs.go:183] generating proxyClientCA CA: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key
	I0810 22:18:12.630304   30643 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.crt ...
	I0810 22:18:12.630337   30643 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.crt: {Name:mk0d9a93ac27d5d262b03ecca1c170e034125d37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:18:12.630516   30643 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key ...
	I0810 22:18:12.630528   30643 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key: {Name:mka3fce1362cb2a100bc0c74137f7ab0409e7548 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:18:12.630639   30643 certs.go:294] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/client.key
	I0810 22:18:12.630650   30643 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/client.crt with IP's: []
	I0810 22:18:12.952352   30643 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/client.crt ...
	I0810 22:18:12.952390   30643 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/client.crt: {Name:mkb4c5b5efb1577c2544d6602e30558dd1d2ac68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:18:12.952601   30643 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/client.key ...
	I0810 22:18:12.952618   30643 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/client.key: {Name:mk296e3a4ad3cc28d815f966f13fc6ef0cff2dbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:18:12.952704   30643 certs.go:294] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/apiserver.key.2a920f45
	I0810 22:18:12.952716   30643 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/apiserver.crt.2a920f45 with IP's: [192.168.50.30 10.96.0.1 127.0.0.1 10.0.0.1]
	I0810 22:18:13.040881   30643 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/apiserver.crt.2a920f45 ...
	I0810 22:18:13.040915   30643 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/apiserver.crt.2a920f45: {Name:mk4f2ade6e2311dadb0d708d29344d45eee70163 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:18:13.041094   30643 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/apiserver.key.2a920f45 ...
	I0810 22:18:13.041107   30643 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/apiserver.key.2a920f45: {Name:mk22313097d37f000c867d5320b7854cc2e01578 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:18:13.041183   30643 certs.go:305] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/apiserver.crt.2a920f45 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/apiserver.crt
	I0810 22:18:13.041241   30643 certs.go:309] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/apiserver.key.2a920f45 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/apiserver.key
	I0810 22:18:13.041303   30643 certs.go:294] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/proxy-client.key
	I0810 22:18:13.041316   30643 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/proxy-client.crt with IP's: []
	I0810 22:18:13.168567   30643 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/proxy-client.crt ...
	I0810 22:18:13.168606   30643 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/proxy-client.crt: {Name:mkdab10bf7a015193c97d359fb9331799680ac8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:18:13.168808   30643 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/proxy-client.key ...
	I0810 22:18:13.168823   30643 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/proxy-client.key: {Name:mk7aa2226987fa316584c1dcce03d502dd2ba974 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:18:13.169026   30643 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem (1679 bytes)
	I0810 22:18:13.169072   30643 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem (1082 bytes)
	I0810 22:18:13.169101   30643 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem (1123 bytes)
	I0810 22:18:13.169135   30643 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem (1679 bytes)
	I0810 22:18:13.170179   30643 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0810 22:18:13.188509   30643 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0810 22:18:13.204976   30643 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0810 22:18:13.220640   30643 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0810 22:18:13.236631   30643 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0810 22:18:13.252386   30643 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0810 22:18:13.269034   30643 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0810 22:18:13.285001   30643 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0810 22:18:13.300925   30643 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0810 22:18:13.316790   30643 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0810 22:18:13.330072   30643 ssh_runner.go:149] Run: openssl version
	I0810 22:18:13.336534   30643 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0810 22:18:13.344854   30643 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0810 22:18:13.349730   30643 certs.go:416] hashing: -rw-r--r-- 1 root root 1111 Aug 10 22:18 /usr/share/ca-certificates/minikubeCA.pem
	I0810 22:18:13.349769   30643 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0810 22:18:13.355960   30643 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0810 22:18:13.363727   30643 kubeadm.go:390] StartCluster: {Name:addons-20210810221736-30291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12122/minikube-v1.22.0-1628238775-12122.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210810221736-30291 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.30 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0810 22:18:13.363807   30643 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0810 22:18:13.363845   30643 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0810 22:18:13.397575   30643 cri.go:76] found id: ""
	I0810 22:18:13.397646   30643 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0810 22:18:13.406184   30643 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0810 22:18:13.413054   30643 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0810 22:18:13.421765   30643 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0810 22:18:13.421804   30643 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	I0810 22:18:13.926902   30643 out.go:204]   - Generating certificates and keys ...
	I0810 22:18:17.012720   30643 out.go:204]   - Booting up control plane ...
	I0810 22:18:33.587293   30643 out.go:204]   - Configuring RBAC rules ...
	I0810 22:18:34.186555   30643 cni.go:93] Creating CNI manager for ""
	I0810 22:18:34.186582   30643 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0810 22:18:34.188215   30643 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0810 22:18:34.188287   30643 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0810 22:18:34.210032   30643 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0810 22:18:34.241076   30643 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0810 22:18:34.241122   30643 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:18:34.241131   30643 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=877a5691753f15214a0c269ac69dcdc5a4d99fcd minikube.k8s.io/name=addons-20210810221736-30291 minikube.k8s.io/updated_at=2021_08_10T22_18_34_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:18:34.529483   30643 ops.go:34] apiserver oom_adj: -16
	I0810 22:18:34.529618   30643 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:18:35.162240   30643 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:18:35.661988   30643 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:18:36.162007   30643 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:18:36.662619   30643 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:18:37.162426   30643 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:18:37.662181   30643 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:18:38.162590   30643 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:18:38.661983   30643 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:18:39.162052   30643 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:18:39.662191   30643 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:18:40.162503   30643 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:18:40.662243   30643 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:18:41.162411   30643 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:18:41.661969   30643 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:18:42.161948   30643 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:18:42.662090   30643 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:18:43.162069   30643 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:18:43.662622   30643 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:18:44.162074   30643 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:18:44.662503   30643 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:18:45.162402   30643 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:18:45.662388   30643 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:18:46.162855   30643 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:18:46.662919   30643 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:18:47.162180   30643 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:18:47.277532   30643 kubeadm.go:985] duration metric: took 13.036468172s to wait for elevateKubeSystemPrivileges.
	I0810 22:18:47.277571   30643 kubeadm.go:392] StartCluster complete in 33.913850155s
	I0810 22:18:47.277606   30643 settings.go:142] acquiring lock: {Name:mk9de8b97604ec8ec02e9734983b03b6308517c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:18:47.277786   30643 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0810 22:18:47.278426   30643 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig: {Name:mkb7fc7bcea695301999150daa705ac3e8a4c8a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:18:47.812034   30643 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "addons-20210810221736-30291" rescaled to 1
	I0810 22:18:47.812102   30643 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.50.30 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0810 22:18:47.812157   30643 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0810 22:18:47.812200   30643 addons.go:342] enableAddons start: toEnable=map[], additional=[registry metrics-server olm volumesnapshots csi-hostpath-driver ingress helm-tiller]
	I0810 22:18:47.813674   30643 out.go:177] * Verifying Kubernetes components...
	I0810 22:18:47.812285   30643 addons.go:59] Setting volumesnapshots=true in profile "addons-20210810221736-30291"
	I0810 22:18:47.813769   30643 addons.go:135] Setting addon volumesnapshots=true in "addons-20210810221736-30291"
	I0810 22:18:47.812301   30643 addons.go:59] Setting helm-tiller=true in profile "addons-20210810221736-30291"
	I0810 22:18:47.813846   30643 host.go:66] Checking if "addons-20210810221736-30291" exists ...
	I0810 22:18:47.813846   30643 addons.go:135] Setting addon helm-tiller=true in "addons-20210810221736-30291"
	I0810 22:18:47.812303   30643 addons.go:59] Setting default-storageclass=true in profile "addons-20210810221736-30291"
	I0810 22:18:47.814058   30643 host.go:66] Checking if "addons-20210810221736-30291" exists ...
	I0810 22:18:47.814087   30643 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-20210810221736-30291"
	I0810 22:18:47.812324   30643 addons.go:59] Setting metrics-server=true in profile "addons-20210810221736-30291"
	I0810 22:18:47.814189   30643 addons.go:135] Setting addon metrics-server=true in "addons-20210810221736-30291"
	I0810 22:18:47.814238   30643 host.go:66] Checking if "addons-20210810221736-30291" exists ...
	I0810 22:18:47.812328   30643 addons.go:59] Setting registry=true in profile "addons-20210810221736-30291"
	I0810 22:18:47.812328   30643 addons.go:59] Setting csi-hostpath-driver=true in profile "addons-20210810221736-30291"
	I0810 22:18:47.812346   30643 addons.go:59] Setting olm=true in profile "addons-20210810221736-30291"
	I0810 22:18:47.813739   30643 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0810 22:18:47.814268   30643 addons.go:135] Setting addon registry=true in "addons-20210810221736-30291"
	I0810 22:18:47.814306   30643 host.go:66] Checking if "addons-20210810221736-30291" exists ...
	I0810 22:18:47.812312   30643 addons.go:59] Setting ingress=true in profile "addons-20210810221736-30291"
	I0810 22:18:47.814352   30643 addons.go:135] Setting addon ingress=true in "addons-20210810221736-30291"
	I0810 22:18:47.814382   30643 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:18:47.814392   30643 host.go:66] Checking if "addons-20210810221736-30291" exists ...
	I0810 22:18:47.814415   30643 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:18:47.814465   30643 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:18:47.814487   30643 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:18:47.814581   30643 addons.go:135] Setting addon csi-hostpath-driver=true in "addons-20210810221736-30291"
	I0810 22:18:47.814582   30643 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:18:47.814610   30643 addons.go:135] Setting addon olm=true in "addons-20210810221736-30291"
	I0810 22:18:47.814618   30643 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:18:47.814635   30643 host.go:66] Checking if "addons-20210810221736-30291" exists ...
	I0810 22:18:47.814643   30643 host.go:66] Checking if "addons-20210810221736-30291" exists ...
	I0810 22:18:47.814775   30643 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:18:47.814808   30643 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:18:47.814815   30643 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:18:47.812333   30643 addons.go:59] Setting storage-provisioner=true in profile "addons-20210810221736-30291"
	I0810 22:18:47.814849   30643 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:18:47.814851   30643 addons.go:135] Setting addon storage-provisioner=true in "addons-20210810221736-30291"
	W0810 22:18:47.814861   30643 addons.go:147] addon storage-provisioner should already be in state true
	I0810 22:18:47.814880   30643 host.go:66] Checking if "addons-20210810221736-30291" exists ...
	I0810 22:18:47.814930   30643 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:18:47.814958   30643 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:18:47.815102   30643 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:18:47.815172   30643 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:18:47.815287   30643 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:18:47.815326   30643 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:18:47.815552   30643 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:18:47.815588   30643 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:18:47.830249   30643 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37059
	I0810 22:18:47.830508   30643 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44805
	I0810 22:18:47.830643   30643 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43573
	I0810 22:18:47.830856   30643 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:18:47.830898   30643 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:18:47.831010   30643 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:18:47.831454   30643 main.go:130] libmachine: Using API Version  1
	I0810 22:18:47.831478   30643 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:18:47.831457   30643 main.go:130] libmachine: Using API Version  1
	I0810 22:18:47.831537   30643 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:18:47.831576   30643 main.go:130] libmachine: Using API Version  1
	I0810 22:18:47.831588   30643 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:18:47.831883   30643 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:18:47.831907   30643 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:18:47.831927   30643 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:18:47.832536   30643 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:18:47.832546   30643 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:18:47.832539   30643 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:18:47.832582   30643 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:18:47.832587   30643 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:18:47.832617   30643 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:18:47.839190   30643 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:39471
	I0810 22:18:47.839621   30643 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:18:47.840177   30643 main.go:130] libmachine: Using API Version  1
	I0810 22:18:47.840209   30643 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:18:47.840559   30643 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:18:47.841167   30643 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:18:47.841219   30643 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:18:47.843205   30643 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42971
	I0810 22:18:47.843566   30643 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:18:47.844056   30643 main.go:130] libmachine: Using API Version  1
	I0810 22:18:47.844092   30643 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:18:47.844449   30643 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:18:47.844646   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetState
	I0810 22:18:47.845430   30643 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43359
	I0810 22:18:47.845958   30643 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:18:47.846497   30643 main.go:130] libmachine: Using API Version  1
	I0810 22:18:47.846516   30643 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:18:47.846953   30643 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:18:47.847452   30643 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33895
	I0810 22:18:47.847546   30643 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:18:47.847589   30643 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:18:47.847860   30643 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:18:47.848318   30643 main.go:130] libmachine: Using API Version  1
	I0810 22:18:47.848342   30643 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:18:47.848714   30643 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:18:47.849427   30643 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37787
	I0810 22:18:47.854444   30643 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42247
	I0810 22:18:47.854606   30643 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45247
	I0810 22:18:47.862420   30643 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36241
	I0810 22:18:47.864690   30643 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:18:47.864697   30643 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:18:47.864707   30643 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:18:47.864746   30643 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:18:47.865275   30643 main.go:130] libmachine: Using API Version  1
	I0810 22:18:47.865295   30643 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:18:47.867284   30643 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:18:47.867283   30643 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:18:47.867352   30643 main.go:130] libmachine: Using API Version  1
	I0810 22:18:47.867373   30643 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:18:47.867302   30643 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:18:47.867459   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetState
	I0810 22:18:47.867822   30643 main.go:130] libmachine: Using API Version  1
	I0810 22:18:47.867841   30643 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:18:47.867873   30643 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:18:47.867947   30643 main.go:130] libmachine: Using API Version  1
	I0810 22:18:47.867967   30643 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:18:47.868089   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetState
	I0810 22:18:47.868201   30643 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:18:47.868410   30643 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:18:47.868750   30643 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:18:47.868787   30643 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:18:47.868974   30643 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:18:47.869013   30643 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:18:47.871602   30643 addons.go:135] Setting addon default-storageclass=true in "addons-20210810221736-30291"
	W0810 22:18:47.871627   30643 addons.go:147] addon default-storageclass should already be in state true
	I0810 22:18:47.871660   30643 host.go:66] Checking if "addons-20210810221736-30291" exists ...
	I0810 22:18:47.872047   30643 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:18:47.872086   30643 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:18:47.872471   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .DriverName
	I0810 22:18:47.872731   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .DriverName
	I0810 22:18:47.874713   30643 out.go:177]   - Using image gcr.io/google_containers/kube-registry-proxy:0.4
	I0810 22:18:47.876389   30643 out.go:177]   - Using image k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0
	I0810 22:18:47.876468   30643 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0810 22:18:47.877905   30643 out.go:177]   - Using image registry:2.7.1
	I0810 22:18:47.878017   30643 addons.go:275] installing /etc/kubernetes/addons/registry-rc.yaml
	I0810 22:18:47.878027   30643 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (788 bytes)
	I0810 22:18:47.878048   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHHostname
	I0810 22:18:47.876491   30643 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0810 22:18:47.878112   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHHostname
	I0810 22:18:47.878647   30643 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37533
	I0810 22:18:47.879170   30643 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:18:47.879732   30643 main.go:130] libmachine: Using API Version  1
	I0810 22:18:47.879750   30643 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:18:47.880340   30643 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:18:47.880599   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetState
	I0810 22:18:47.881302   30643 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45507
	I0810 22:18:47.881595   30643 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44605
	I0810 22:18:47.881849   30643 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:18:47.883517   30643 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:18:47.883640   30643 main.go:130] libmachine: Using API Version  1
	I0810 22:18:47.883667   30643 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:18:47.884236   30643 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:18:47.884532   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .DriverName
	I0810 22:18:47.884553   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetState
	I0810 22:18:47.884648   30643 main.go:130] libmachine: Using API Version  1
	I0810 22:18:47.884674   30643 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:18:47.886459   30643 out.go:177]   - Using image gcr.io/kubernetes-helm/tiller:v2.16.12
	I0810 22:18:47.885104   30643 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:18:47.886567   30643 addons.go:275] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0810 22:18:47.886578   30643 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2433 bytes)
	I0810 22:18:47.886602   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHHostname
	I0810 22:18:47.888447   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetState
	I0810 22:18:47.889840   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:18:47.890966   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:18:47.891119   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:a1:0c", ip: ""} in network mk-addons-20210810221736-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:17:51 +0000 UTC Type:0 Mac:52:54:00:f8:a1:0c Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:addons-20210810221736-30291 Clientid:01:52:54:00:f8:a1:0c}
	I0810 22:18:47.891153   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined IP address 192.168.50.30 and MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:18:47.891497   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .DriverName
	I0810 22:18:47.891539   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHPort
	I0810 22:18:47.891610   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:a1:0c", ip: ""} in network mk-addons-20210810221736-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:17:51 +0000 UTC Type:0 Mac:52:54:00:f8:a1:0c Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:addons-20210810221736-30291 Clientid:01:52:54:00:f8:a1:0c}
	I0810 22:18:47.891626   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined IP address 192.168.50.30 and MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:18:47.891656   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHKeyPath
	I0810 22:18:47.891773   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHPort
	I0810 22:18:47.891827   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHUsername
	I0810 22:18:47.891926   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHKeyPath
	I0810 22:18:47.891973   30643 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210810221736-30291/id_rsa Username:docker}
	I0810 22:18:47.893534   30643 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
	I0810 22:18:47.892303   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHUsername
	I0810 22:18:47.895104   30643 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0810 22:18:47.893733   30643 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210810221736-30291/id_rsa Username:docker}
	I0810 22:18:47.893936   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .DriverName
	I0810 22:18:47.894273   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:18:47.894809   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHPort
	I0810 22:18:47.896685   30643 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0810 22:18:47.896735   30643 addons.go:275] installing /etc/kubernetes/addons/ingress-configmap.yaml
	I0810 22:18:47.896744   30643 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-configmap.yaml (1865 bytes)
	I0810 22:18:47.896762   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHHostname
	I0810 22:18:47.896820   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHKeyPath
	I0810 22:18:47.896830   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:a1:0c", ip: ""} in network mk-addons-20210810221736-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:17:51 +0000 UTC Type:0 Mac:52:54:00:f8:a1:0c Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:addons-20210810221736-30291 Clientid:01:52:54:00:f8:a1:0c}
	I0810 22:18:47.896855   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined IP address 192.168.50.30 and MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:18:47.897823   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHUsername
	I0810 22:18:47.897903   30643 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42715
	I0810 22:18:47.898045   30643 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210810221736-30291/id_rsa Username:docker}
	I0810 22:18:47.898264   30643 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:18:47.899980   30643 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0810 22:18:47.898843   30643 main.go:130] libmachine: Using API Version  1
	I0810 22:18:47.900105   30643 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:18:47.900121   30643 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36957
	I0810 22:18:47.899074   30643 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37741
	I0810 22:18:47.900077   30643 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0810 22:18:47.900143   30643 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0810 22:18:47.900159   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHHostname
	I0810 22:18:47.900854   30643 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:18:47.901055   30643 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:18:47.901214   30643 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:18:47.901233   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetState
	I0810 22:18:47.901551   30643 main.go:130] libmachine: Using API Version  1
	I0810 22:18:47.901574   30643 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:18:47.901685   30643 main.go:130] libmachine: Using API Version  1
	I0810 22:18:47.901711   30643 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:18:47.901932   30643 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:18:47.902116   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetState
	I0810 22:18:47.902830   30643 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:18:47.902897   30643 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45857
	I0810 22:18:47.903005   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetState
	I0810 22:18:47.903388   30643 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:18:47.903612   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:18:47.903857   30643 main.go:130] libmachine: Using API Version  1
	I0810 22:18:47.903875   30643 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:18:47.904257   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:a1:0c", ip: ""} in network mk-addons-20210810221736-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:17:51 +0000 UTC Type:0 Mac:52:54:00:f8:a1:0c Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:addons-20210810221736-30291 Clientid:01:52:54:00:f8:a1:0c}
	I0810 22:18:47.904277   30643 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:18:47.904279   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined IP address 192.168.50.30 and MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:18:47.904415   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHPort
	I0810 22:18:47.904562   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHKeyPath
	I0810 22:18:47.904943   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHUsername
	I0810 22:18:47.905057   30643 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:18:47.905101   30643 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:18:47.905170   30643 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210810221736-30291/id_rsa Username:docker}
	I0810 22:18:47.905418   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .DriverName
	I0810 22:18:47.907374   30643 out.go:177]   - Using image k8s.gcr.io/metrics-server/metrics-server:v0.4.2
	I0810 22:18:47.907430   30643 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0810 22:18:47.907442   30643 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0810 22:18:47.907460   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHHostname
	I0810 22:18:47.906025   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .DriverName
	I0810 22:18:47.909077   30643 out.go:177]   - Using image k8s.gcr.io/sig-storage/livenessprobe:v2.2.0
	I0810 22:18:47.907853   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .DriverName
	I0810 22:18:47.908171   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:18:47.908930   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHPort
	I0810 22:18:47.911972   30643 out.go:177]   - Using image quay.io/operator-framework/olm:v0.17.0
	I0810 22:18:47.910728   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:a1:0c", ip: ""} in network mk-addons-20210810221736-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:17:51 +0000 UTC Type:0 Mac:52:54:00:f8:a1:0c Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:addons-20210810221736-30291 Clientid:01:52:54:00:f8:a1:0c}
	I0810 22:18:47.912025   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined IP address 192.168.50.30 and MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:18:47.910744   30643 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1
	I0810 22:18:47.910946   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHKeyPath
	I0810 22:18:47.913355   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:18:47.913466   30643 out.go:177]   - Using image quay.io/operator-framework/upstream-community-operators:07bbc13
	I0810 22:18:47.913650   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHUsername
	I0810 22:18:47.914080   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHPort
	I0810 22:18:47.914865   30643 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0
	I0810 22:18:47.919431   30643 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0
	I0810 22:18:47.914954   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:a1:0c", ip: ""} in network mk-addons-20210810221736-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:17:51 +0000 UTC Type:0 Mac:52:54:00:f8:a1:0c Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:addons-20210810221736-30291 Clientid:01:52:54:00:f8:a1:0c}
	I0810 22:18:47.915057   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHKeyPath
	I0810 22:18:47.915056   30643 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210810221736-30291/id_rsa Username:docker}
	I0810 22:18:47.917279   30643 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35161
	I0810 22:18:47.921030   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined IP address 192.168.50.30 and MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:18:47.921031   30643 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-agent:v0.2.0
	I0810 22:18:47.922417   30643 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-controller:v0.2.0
	I0810 22:18:47.921391   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHUsername
	I0810 22:18:47.921391   30643 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:18:47.922668   30643 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210810221736-30291/id_rsa Username:docker}
	I0810 22:18:47.923830   30643 out.go:177]   - Using image k8s.gcr.io/sig-storage/hostpathplugin:v1.6.0
	I0810 22:18:47.925353   30643 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-attacher:v3.1.0
	I0810 22:18:47.924367   30643 main.go:130] libmachine: Using API Version  1
	I0810 22:18:47.925393   30643 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:18:47.926819   30643 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-resizer:v1.1.0
	I0810 22:18:47.925804   30643 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:18:47.926885   30643 addons.go:275] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0810 22:18:47.926899   30643 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0810 22:18:47.926923   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHHostname
	I0810 22:18:47.927011   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetState
	I0810 22:18:47.931569   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .DriverName
	I0810 22:18:47.931822   30643 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0810 22:18:47.931839   30643 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0810 22:18:47.931857   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHHostname
	I0810 22:18:47.933015   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:18:47.933416   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:a1:0c", ip: ""} in network mk-addons-20210810221736-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:17:51 +0000 UTC Type:0 Mac:52:54:00:f8:a1:0c Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:addons-20210810221736-30291 Clientid:01:52:54:00:f8:a1:0c}
	I0810 22:18:47.933447   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined IP address 192.168.50.30 and MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:18:47.933600   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHPort
	I0810 22:18:47.933787   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHKeyPath
	I0810 22:18:47.933974   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHUsername
	I0810 22:18:47.934151   30643 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210810221736-30291/id_rsa Username:docker}
	I0810 22:18:47.937504   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:18:47.937890   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:a1:0c", ip: ""} in network mk-addons-20210810221736-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:17:51 +0000 UTC Type:0 Mac:52:54:00:f8:a1:0c Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:addons-20210810221736-30291 Clientid:01:52:54:00:f8:a1:0c}
	I0810 22:18:47.937918   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined IP address 192.168.50.30 and MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:18:47.938051   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHPort
	I0810 22:18:47.938214   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHKeyPath
	I0810 22:18:47.938379   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHUsername
	I0810 22:18:47.938518   30643 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210810221736-30291/id_rsa Username:docker}
	I0810 22:18:47.939128   30643 addons.go:275] installing /etc/kubernetes/addons/crds.yaml
	I0810 22:18:47.939152   30643 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/crds.yaml (825331 bytes)
	I0810 22:18:47.939175   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHHostname
	I0810 22:18:47.944578   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:18:47.944960   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:a1:0c", ip: ""} in network mk-addons-20210810221736-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:17:51 +0000 UTC Type:0 Mac:52:54:00:f8:a1:0c Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:addons-20210810221736-30291 Clientid:01:52:54:00:f8:a1:0c}
	I0810 22:18:47.944990   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | domain addons-20210810221736-30291 has defined IP address 192.168.50.30 and MAC address 52:54:00:f8:a1:0c in network mk-addons-20210810221736-30291
	I0810 22:18:47.945147   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHPort
	I0810 22:18:47.945340   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHKeyPath
	I0810 22:18:47.945498   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .GetSSHUsername
	I0810 22:18:47.945616   30643 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210810221736-30291/id_rsa Username:docker}
	I0810 22:18:48.409323   30643 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0810 22:18:48.430992   30643 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0810 22:18:48.431016   30643 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0810 22:18:48.469494   30643 addons.go:275] installing /etc/kubernetes/addons/registry-svc.yaml
	I0810 22:18:48.469527   30643 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0810 22:18:48.503028   30643 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0810 22:18:48.503053   30643 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1931 bytes)
	I0810 22:18:48.512689   30643 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0810 22:18:48.512708   30643 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0810 22:18:48.542067   30643 addons.go:275] installing /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml
	I0810 22:18:48.542122   30643 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml (2203 bytes)
	I0810 22:18:48.565065   30643 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0810 22:18:48.569074   30643 addons.go:275] installing /etc/kubernetes/addons/olm.yaml
	I0810 22:18:48.569092   30643 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/olm.yaml (9882 bytes)
	I0810 22:18:48.596228   30643 addons.go:275] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0810 22:18:48.596255   30643 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0810 22:18:48.610893   30643 addons.go:275] installing /etc/kubernetes/addons/ingress-rbac.yaml
	I0810 22:18:48.610916   30643 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-rbac.yaml (6005 bytes)
	I0810 22:18:48.670180   30643 addons.go:275] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0810 22:18:48.670210   30643 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (950 bytes)
	I0810 22:18:48.712756   30643 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0810 22:18:48.712784   30643 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0810 22:18:48.737377   30643 addons.go:275] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0810 22:18:48.737399   30643 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3037 bytes)
	I0810 22:18:48.739722   30643 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0810 22:18:48.739741   30643 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19584 bytes)
	I0810 22:18:48.792780   30643 addons.go:275] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0810 22:18:48.792804   30643 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0810 22:18:48.816868   30643 addons.go:275] installing /etc/kubernetes/addons/ingress-dp.yaml
	I0810 22:18:48.816888   30643 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-dp.yaml (9394 bytes)
	I0810 22:18:48.924607   30643 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	I0810 22:18:48.945821   30643 addons.go:275] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0810 22:18:48.945847   30643 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3428 bytes)
	I0810 22:18:48.984340   30643 addons.go:275] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0810 22:18:48.984374   30643 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (3666 bytes)
	I0810 22:18:49.010369   30643 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0810 22:18:49.024026   30643 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0810 22:18:49.024051   30643 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0810 22:18:49.088175   30643 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml
	I0810 22:18:49.092241   30643 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0810 22:18:49.106471   30643 addons.go:275] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0810 22:18:49.106493   30643 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1071 bytes)
	I0810 22:18:49.122638   30643 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.31044879s)
	I0810 22:18:49.122701   30643 ssh_runner.go:189] Completed: sudo systemctl is-active --quiet service kubelet: (1.308420783s)
	I0810 22:18:49.122802   30643 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0810 22:18:49.125199   30643 node_ready.go:35] waiting up to 6m0s for node "addons-20210810221736-30291" to be "Ready" ...
	I0810 22:18:49.129728   30643 node_ready.go:49] node "addons-20210810221736-30291" has status "Ready":"True"
	I0810 22:18:49.129751   30643 node_ready.go:38] duration metric: took 4.522135ms waiting for node "addons-20210810221736-30291" to be "Ready" ...
	I0810 22:18:49.129761   30643 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0810 22:18:49.153433   30643 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-fqn7v" in "kube-system" namespace to be "Ready" ...
	I0810 22:18:49.203230   30643 addons.go:275] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0810 22:18:49.203267   30643 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2944 bytes)
	I0810 22:18:49.228535   30643 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0810 22:18:49.233174   30643 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0810 22:18:49.311793   30643 addons.go:275] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0810 22:18:49.311823   30643 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3194 bytes)
	I0810 22:18:49.444829   30643 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0810 22:18:49.444855   30643 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2421 bytes)
	I0810 22:18:49.586598   30643 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0810 22:18:49.586627   30643 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1034 bytes)
	I0810 22:18:49.740483   30643 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0810 22:18:49.740511   30643 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (6710 bytes)
	I0810 22:18:50.204716   30643 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-provisioner.yaml
	I0810 22:18:50.204741   30643 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-provisioner.yaml (2555 bytes)
	I0810 22:18:50.413273   30643 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0810 22:18:50.413300   30643 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2469 bytes)
	I0810 22:18:50.601123   30643 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml
	I0810 22:18:50.601151   30643 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml (2555 bytes)
	I0810 22:18:50.778410   30643 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0810 22:18:50.778445   30643 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0810 22:18:50.894329   30643 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0810 22:18:51.087091   30643 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.677725234s)
	I0810 22:18:51.087232   30643 main.go:130] libmachine: Making call to close driver server
	I0810 22:18:51.087254   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .Close
	I0810 22:18:51.087536   30643 main.go:130] libmachine: Successfully made call to close driver server
	I0810 22:18:51.087616   30643 main.go:130] libmachine: Making call to close connection to plugin binary
	I0810 22:18:51.087621   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | Closing plugin on server side
	I0810 22:18:51.087636   30643 main.go:130] libmachine: Making call to close driver server
	I0810 22:18:51.087656   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .Close
	I0810 22:18:51.087887   30643 main.go:130] libmachine: Successfully made call to close driver server
	I0810 22:18:51.087904   30643 main.go:130] libmachine: Making call to close connection to plugin binary
	I0810 22:18:51.087916   30643 main.go:130] libmachine: Making call to close driver server
	I0810 22:18:51.087934   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .Close
	I0810 22:18:51.087935   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | Closing plugin on server side
	I0810 22:18:51.088214   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | Closing plugin on server side
	I0810 22:18:51.088245   30643 main.go:130] libmachine: Successfully made call to close driver server
	I0810 22:18:51.088267   30643 main.go:130] libmachine: Making call to close connection to plugin binary
	I0810 22:18:51.173272   30643 pod_ready.go:102] pod "coredns-558bd4d5db-fqn7v" in "kube-system" namespace has status "Ready":"False"
	I0810 22:18:51.427304   30643 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.862194639s)
	I0810 22:18:51.427353   30643 main.go:130] libmachine: Making call to close driver server
	I0810 22:18:51.427365   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .Close
	I0810 22:18:51.427706   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | Closing plugin on server side
	I0810 22:18:51.427818   30643 main.go:130] libmachine: Successfully made call to close driver server
	I0810 22:18:51.427843   30643 main.go:130] libmachine: Making call to close connection to plugin binary
	I0810 22:18:51.427851   30643 main.go:130] libmachine: Making call to close driver server
	I0810 22:18:51.427861   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .Close
	I0810 22:18:51.428175   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | Closing plugin on server side
	I0810 22:18:51.428222   30643 main.go:130] libmachine: Successfully made call to close driver server
	I0810 22:18:51.428236   30643 main.go:130] libmachine: Making call to close connection to plugin binary
	I0810 22:18:53.339834   30643 pod_ready.go:102] pod "coredns-558bd4d5db-fqn7v" in "kube-system" namespace has status "Ready":"False"
	I0810 22:18:53.697925   30643 pod_ready.go:92] pod "coredns-558bd4d5db-fqn7v" in "kube-system" namespace has status "Ready":"True"
	I0810 22:18:53.697954   30643 pod_ready.go:81] duration metric: took 4.544488615s waiting for pod "coredns-558bd4d5db-fqn7v" in "kube-system" namespace to be "Ready" ...
	I0810 22:18:53.697969   30643 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-20210810221736-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:18:53.711337   30643 pod_ready.go:92] pod "etcd-addons-20210810221736-30291" in "kube-system" namespace has status "Ready":"True"
	I0810 22:18:53.711353   30643 pod_ready.go:81] duration metric: took 13.375761ms waiting for pod "etcd-addons-20210810221736-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:18:53.711362   30643 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-20210810221736-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:18:53.717358   30643 pod_ready.go:92] pod "kube-apiserver-addons-20210810221736-30291" in "kube-system" namespace has status "Ready":"True"
	I0810 22:18:53.717371   30643 pod_ready.go:81] duration metric: took 6.003497ms waiting for pod "kube-apiserver-addons-20210810221736-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:18:53.717379   30643 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-20210810221736-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:18:53.730222   30643 pod_ready.go:92] pod "kube-controller-manager-addons-20210810221736-30291" in "kube-system" namespace has status "Ready":"True"
	I0810 22:18:53.730235   30643 pod_ready.go:81] duration metric: took 12.849887ms waiting for pod "kube-controller-manager-addons-20210810221736-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:18:53.730244   30643 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d4lb4" in "kube-system" namespace to be "Ready" ...
	I0810 22:18:53.757274   30643 pod_ready.go:92] pod "kube-proxy-d4lb4" in "kube-system" namespace has status "Ready":"True"
	I0810 22:18:53.757289   30643 pod_ready.go:81] duration metric: took 27.039619ms waiting for pod "kube-proxy-d4lb4" in "kube-system" namespace to be "Ready" ...
	I0810 22:18:53.757298   30643 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-20210810221736-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:18:54.078073   30643 pod_ready.go:92] pod "kube-scheduler-addons-20210810221736-30291" in "kube-system" namespace has status "Ready":"True"
	I0810 22:18:54.078111   30643 pod_ready.go:81] duration metric: took 320.804183ms waiting for pod "kube-scheduler-addons-20210810221736-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:18:54.078125   30643 pod_ready.go:38] duration metric: took 4.948346532s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0810 22:18:54.078151   30643 api_server.go:50] waiting for apiserver process to appear ...
	I0810 22:18:54.078208   30643 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0810 22:18:57.190609   30643 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (8.265956623s)
	W0810 22:18:57.190669   30643 addons.go:296] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	namespace/olm created
	namespace/operators created
	serviceaccount/olm-operator-serviceaccount created
	clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	deployment.apps/olm-operator created
	deployment.apps/catalog-operator created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
	I0810 22:18:57.190722   30643 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	namespace/olm created
	namespace/operators created
	serviceaccount/olm-operator-serviceaccount created
	clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	deployment.apps/olm-operator created
	deployment.apps/catalog-operator created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
	I0810 22:18:57.190608   30643 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.180198605s)
	I0810 22:18:57.190768   30643 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml: (8.102552161s)
	I0810 22:18:57.190801   30643 main.go:130] libmachine: Making call to close driver server
	I0810 22:18:57.190823   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .Close
	I0810 22:18:57.190846   30643 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.098574371s)
	I0810 22:18:57.190873   30643 main.go:130] libmachine: Making call to close driver server
	I0810 22:18:57.190887   30643 main.go:130] libmachine: Making call to close driver server
	I0810 22:18:57.190895   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .Close
	I0810 22:18:57.190900   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .Close
	I0810 22:18:57.190908   30643 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.068073546s)
	I0810 22:18:57.190927   30643 start.go:736] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS
	I0810 22:18:57.191025   30643 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.962456861s)
	I0810 22:18:57.191326   30643 main.go:130] libmachine: Making call to close driver server
	I0810 22:18:57.191345   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .Close
	I0810 22:18:57.191103   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | Closing plugin on server side
	I0810 22:18:57.191139   30643 main.go:130] libmachine: Successfully made call to close driver server
	I0810 22:18:57.191402   30643 main.go:130] libmachine: Making call to close connection to plugin binary
	I0810 22:18:57.191412   30643 main.go:130] libmachine: Making call to close driver server
	I0810 22:18:57.191424   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .Close
	I0810 22:18:57.191156   30643 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.957948088s)
	W0810 22:18:57.191532   30643 addons.go:296] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	I0810 22:18:57.191590   30643 retry.go:31] will retry after 360.127272ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	I0810 22:18:57.191167   30643 main.go:130] libmachine: Successfully made call to close driver server
	I0810 22:18:57.191681   30643 main.go:130] libmachine: Making call to close connection to plugin binary
	I0810 22:18:57.191726   30643 main.go:130] libmachine: Making call to close driver server
	I0810 22:18:57.191774   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .Close
	I0810 22:18:57.191188   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | Closing plugin on server side
	I0810 22:18:57.191253   30643 main.go:130] libmachine: Successfully made call to close driver server
	I0810 22:18:57.191853   30643 main.go:130] libmachine: Making call to close connection to plugin binary
	I0810 22:18:57.191866   30643 main.go:130] libmachine: Making call to close driver server
	I0810 22:18:57.191875   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .Close
	I0810 22:18:57.191282   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | Closing plugin on server side
	I0810 22:18:57.193829   30643 main.go:130] libmachine: Successfully made call to close driver server
	I0810 22:18:57.193850   30643 main.go:130] libmachine: Making call to close connection to plugin binary
	I0810 22:18:57.193896   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | Closing plugin on server side
	I0810 22:18:57.193901   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | Closing plugin on server side
	I0810 22:18:57.193911   30643 main.go:130] libmachine: Successfully made call to close driver server
	I0810 22:18:57.193923   30643 main.go:130] libmachine: Successfully made call to close driver server
	I0810 22:18:57.193925   30643 main.go:130] libmachine: Making call to close connection to plugin binary
	I0810 22:18:57.193935   30643 main.go:130] libmachine: Successfully made call to close driver server
	I0810 22:18:57.193938   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | Closing plugin on server side
	I0810 22:18:57.193948   30643 main.go:130] libmachine: Making call to close connection to plugin binary
	I0810 22:18:57.193937   30643 main.go:130] libmachine: Making call to close driver server
	I0810 22:18:57.193975   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .Close
	I0810 22:18:57.193936   30643 main.go:130] libmachine: Making call to close connection to plugin binary
	I0810 22:18:57.194014   30643 addons.go:313] Verifying addon ingress=true in "addons-20210810221736-30291"
	I0810 22:18:57.193959   30643 addons.go:313] Verifying addon registry=true in "addons-20210810221736-30291"
	I0810 22:18:57.196066   30643 out.go:177] * Verifying registry addon...
	I0810 22:18:57.198004   30643 out.go:177] * Verifying ingress addon...
	I0810 22:18:57.194228   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | Closing plugin on server side
	I0810 22:18:57.194232   30643 main.go:130] libmachine: Successfully made call to close driver server
	I0810 22:18:57.198111   30643 main.go:130] libmachine: Making call to close connection to plugin binary
	I0810 22:18:57.198130   30643 addons.go:313] Verifying addon metrics-server=true in "addons-20210810221736-30291"
	I0810 22:18:57.198774   30643 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0810 22:18:57.200219   30643 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0810 22:18:57.318478   30643 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0810 22:18:57.318509   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:18:57.361588   30643 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0810 22:18:57.361617   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:18:57.467194   30643 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	I0810 22:18:57.552787   30643 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0810 22:18:57.825175   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:18:57.872873   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:18:58.398621   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:18:58.511295   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:18:58.900283   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:18:58.977175   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:18:59.331257   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:18:59.358404   30643 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.464019923s)
	I0810 22:18:59.358463   30643 main.go:130] libmachine: Making call to close driver server
	I0810 22:18:59.358480   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .Close
	I0810 22:18:59.358525   30643 ssh_runner.go:189] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.280291819s)
	I0810 22:18:59.358559   30643 api_server.go:70] duration metric: took 11.546399473s to wait for apiserver process to appear ...
	I0810 22:18:59.358569   30643 api_server.go:86] waiting for apiserver healthz status ...
	I0810 22:18:59.358580   30643 api_server.go:239] Checking apiserver healthz at https://192.168.50.30:8443/healthz ...
	I0810 22:18:59.358762   30643 main.go:130] libmachine: Successfully made call to close driver server
	I0810 22:18:59.358802   30643 main.go:130] libmachine: Making call to close connection to plugin binary
	I0810 22:18:59.358829   30643 main.go:130] libmachine: Making call to close driver server
	I0810 22:18:59.358844   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .Close
	I0810 22:18:59.359195   30643 main.go:130] libmachine: Successfully made call to close driver server
	I0810 22:18:59.359211   30643 main.go:130] libmachine: Making call to close connection to plugin binary
	I0810 22:18:59.359223   30643 addons.go:313] Verifying addon csi-hostpath-driver=true in "addons-20210810221736-30291"
	I0810 22:18:59.360986   30643 out.go:177] * Verifying csi-hostpath-driver addon...
	I0810 22:18:59.362747   30643 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0810 22:18:59.387759   30643 api_server.go:265] https://192.168.50.30:8443/healthz returned 200:
	ok
	I0810 22:18:59.388351   30643 kapi.go:86] Found 5 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0810 22:18:59.388369   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:18:59.388783   30643 api_server.go:139] control plane version: v1.21.3
	I0810 22:18:59.388801   30643 api_server.go:129] duration metric: took 30.226133ms to wait for apiserver health ...
	I0810 22:18:59.388955   30643 system_pods.go:43] waiting for kube-system pods to appear ...
	I0810 22:18:59.392224   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:18:59.399756   30643 system_pods.go:59] 18 kube-system pods found
	I0810 22:18:59.399810   30643 system_pods.go:61] "coredns-558bd4d5db-fqn7v" [f7785682-711b-4fb6-8eac-10af32eb4197] Running
	I0810 22:18:59.399826   30643 system_pods.go:61] "csi-hostpath-attacher-0" [3117dd86-287f-46d8-9de9-ffdf1e921c4d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) didn't match pod affinity rules, 1 node(s) didn't match pod affinity/anti-affinity rules.)
	I0810 22:18:59.399840   30643 system_pods.go:61] "csi-hostpath-provisioner-0" [eaf65610-036f-4897-9b82-5cbb0f091bd1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-provisioner])
	I0810 22:18:59.399863   30643 system_pods.go:61] "csi-hostpath-resizer-0" [a9999058-9186-4219-8d84-2ebcefe6b82c] Pending
	I0810 22:18:59.399883   30643 system_pods.go:61] "csi-hostpath-snapshotter-0" [31a1c5ee-daf4-4083-a0b3-b6a5328a51e3] Pending
	I0810 22:18:59.399906   30643 system_pods.go:61] "csi-hostpathplugin-0" [24fed96f-1492-487a-a67d-7bfbcf55c1d1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-agent csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-agent csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe])
	I0810 22:18:59.399913   30643 system_pods.go:61] "etcd-addons-20210810221736-30291" [e2722ec9-c3e3-452a-9225-e855f5722e52] Running
	I0810 22:18:59.399921   30643 system_pods.go:61] "kube-apiserver-addons-20210810221736-30291" [c85f23ae-9f42-4ebc-b93b-7b006079e895] Running
	I0810 22:18:59.399929   30643 system_pods.go:61] "kube-controller-manager-addons-20210810221736-30291" [14eb5f4b-4740-470f-b2a4-4629aedcc278] Running
	I0810 22:18:59.399935   30643 system_pods.go:61] "kube-proxy-d4lb4" [7c768b33-9c26-4a9c-b300-dfba025b2acb] Running
	I0810 22:18:59.399946   30643 system_pods.go:61] "kube-scheduler-addons-20210810221736-30291" [c0c74e46-3d57-4097-814e-085bffaba349] Running
	I0810 22:18:59.399956   30643 system_pods.go:61] "metrics-server-77c99ccb96-dr6pm" [a3519db9-c366-43f2-bc64-14fa8206ceee] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0810 22:18:59.399964   30643 system_pods.go:61] "registry-bzgg5" [a35be290-e994-4568-af3f-633135f23d51] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0810 22:18:59.399978   30643 system_pods.go:61] "registry-proxy-msxjj" [e874a500-5631-4f32-a4f5-0b41e6ba7964] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0810 22:18:59.399987   30643 system_pods.go:61] "snapshot-controller-989f9ddc8-8ccvf" [68bcf55a-7b6f-47f7-87d6-3c79fad793fb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0810 22:18:59.400002   30643 system_pods.go:61] "snapshot-controller-989f9ddc8-pzfj6" [52d9722d-0ea5-4f1a-a69b-53a3ad4cfd20] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0810 22:18:59.400014   30643 system_pods.go:61] "storage-provisioner" [a21dae5f-beaf-40c2-a62b-1deec782e8ac] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0810 22:18:59.400023   30643 system_pods.go:61] "tiller-deploy-768d69497-5cmnh" [3e7a3e5d-ad13-4423-8a48-5780f711aabf] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0810 22:18:59.400032   30643 system_pods.go:74] duration metric: took 11.054537ms to wait for pod list to return data ...
	I0810 22:18:59.400043   30643 default_sa.go:34] waiting for default service account to be created ...
	I0810 22:18:59.419068   30643 default_sa.go:45] found service account: "default"
	I0810 22:18:59.419089   30643 default_sa.go:55] duration metric: took 19.039873ms for default service account to be created ...
	I0810 22:18:59.419098   30643 system_pods.go:116] waiting for k8s-apps to be running ...
	I0810 22:18:59.433935   30643 system_pods.go:86] 18 kube-system pods found
	I0810 22:18:59.433969   30643 system_pods.go:89] "coredns-558bd4d5db-fqn7v" [f7785682-711b-4fb6-8eac-10af32eb4197] Running
	I0810 22:18:59.433980   30643 system_pods.go:89] "csi-hostpath-attacher-0" [3117dd86-287f-46d8-9de9-ffdf1e921c4d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) didn't match pod affinity rules, 1 node(s) didn't match pod affinity/anti-affinity rules.)
	I0810 22:18:59.433991   30643 system_pods.go:89] "csi-hostpath-provisioner-0" [eaf65610-036f-4897-9b82-5cbb0f091bd1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-provisioner])
	I0810 22:18:59.434001   30643 system_pods.go:89] "csi-hostpath-resizer-0" [a9999058-9186-4219-8d84-2ebcefe6b82c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0810 22:18:59.434008   30643 system_pods.go:89] "csi-hostpath-snapshotter-0" [31a1c5ee-daf4-4083-a0b3-b6a5328a51e3] Pending
	I0810 22:18:59.434019   30643 system_pods.go:89] "csi-hostpathplugin-0" [24fed96f-1492-487a-a67d-7bfbcf55c1d1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-agent csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-agent csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe])
	I0810 22:18:59.434028   30643 system_pods.go:89] "etcd-addons-20210810221736-30291" [e2722ec9-c3e3-452a-9225-e855f5722e52] Running
	I0810 22:18:59.434042   30643 system_pods.go:89] "kube-apiserver-addons-20210810221736-30291" [c85f23ae-9f42-4ebc-b93b-7b006079e895] Running
	I0810 22:18:59.434050   30643 system_pods.go:89] "kube-controller-manager-addons-20210810221736-30291" [14eb5f4b-4740-470f-b2a4-4629aedcc278] Running
	I0810 22:18:59.434056   30643 system_pods.go:89] "kube-proxy-d4lb4" [7c768b33-9c26-4a9c-b300-dfba025b2acb] Running
	I0810 22:18:59.434064   30643 system_pods.go:89] "kube-scheduler-addons-20210810221736-30291" [c0c74e46-3d57-4097-814e-085bffaba349] Running
	I0810 22:18:59.434073   30643 system_pods.go:89] "metrics-server-77c99ccb96-dr6pm" [a3519db9-c366-43f2-bc64-14fa8206ceee] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0810 22:18:59.434090   30643 system_pods.go:89] "registry-bzgg5" [a35be290-e994-4568-af3f-633135f23d51] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0810 22:18:59.434099   30643 system_pods.go:89] "registry-proxy-msxjj" [e874a500-5631-4f32-a4f5-0b41e6ba7964] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0810 22:18:59.434108   30643 system_pods.go:89] "snapshot-controller-989f9ddc8-8ccvf" [68bcf55a-7b6f-47f7-87d6-3c79fad793fb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0810 22:18:59.434122   30643 system_pods.go:89] "snapshot-controller-989f9ddc8-pzfj6" [52d9722d-0ea5-4f1a-a69b-53a3ad4cfd20] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0810 22:18:59.434158   30643 system_pods.go:89] "storage-provisioner" [a21dae5f-beaf-40c2-a62b-1deec782e8ac] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0810 22:18:59.434175   30643 system_pods.go:89] "tiller-deploy-768d69497-5cmnh" [3e7a3e5d-ad13-4423-8a48-5780f711aabf] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0810 22:18:59.434184   30643 system_pods.go:126] duration metric: took 15.081738ms to wait for k8s-apps to be running ...
	I0810 22:18:59.434197   30643 system_svc.go:44] waiting for kubelet service to be running ....
	I0810 22:18:59.434251   30643 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0810 22:18:59.833932   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:18:59.917195   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:18:59.930162   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:00.324346   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:00.367690   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:00.403814   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:00.833824   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:00.873131   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:00.906410   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:01.324921   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:01.369867   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:01.398444   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:01.824902   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:01.889874   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:01.902253   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:02.348181   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:02.432334   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:02.469798   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:02.856470   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:02.871778   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:02.902732   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:03.051890   30643 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.499056021s)
	I0810 22:19:03.051915   30643 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (5.584673011s)
	I0810 22:19:03.051938   30643 main.go:130] libmachine: Making call to close driver server
	I0810 22:19:03.051953   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .Close
	I0810 22:19:03.051990   30643 ssh_runner.go:189] Completed: sudo systemctl is-active --quiet service kubelet: (3.61771635s)
	I0810 22:19:03.052020   30643 system_svc.go:56] duration metric: took 3.61782031s WaitForService to wait for kubelet.
	I0810 22:19:03.051956   30643 main.go:130] libmachine: Making call to close driver server
	I0810 22:19:03.052073   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .Close
	I0810 22:19:03.052031   30643 kubeadm.go:547] duration metric: took 15.239873798s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0810 22:19:03.052123   30643 node_conditions.go:102] verifying NodePressure condition ...
	I0810 22:19:03.052260   30643 main.go:130] libmachine: Successfully made call to close driver server
	I0810 22:19:03.052278   30643 main.go:130] libmachine: Making call to close connection to plugin binary
	I0810 22:19:03.052289   30643 main.go:130] libmachine: Making call to close driver server
	I0810 22:19:03.052308   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .Close
	I0810 22:19:03.052317   30643 main.go:130] libmachine: Successfully made call to close driver server
	I0810 22:19:03.052330   30643 main.go:130] libmachine: Making call to close connection to plugin binary
	I0810 22:19:03.052340   30643 main.go:130] libmachine: Making call to close driver server
	I0810 22:19:03.052354   30643 main.go:130] libmachine: (addons-20210810221736-30291) Calling .Close
	I0810 22:19:03.052646   30643 main.go:130] libmachine: Successfully made call to close driver server
	I0810 22:19:03.052676   30643 main.go:130] libmachine: Making call to close connection to plugin binary
	I0810 22:19:03.052698   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | Closing plugin on server side
	I0810 22:19:03.052712   30643 main.go:130] libmachine: (addons-20210810221736-30291) DBG | Closing plugin on server side
	I0810 22:19:03.052808   30643 main.go:130] libmachine: Successfully made call to close driver server
	I0810 22:19:03.052827   30643 main.go:130] libmachine: Making call to close connection to plugin binary
	I0810 22:19:03.061939   30643 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0810 22:19:03.061974   30643 node_conditions.go:123] node cpu capacity is 2
	I0810 22:19:03.061990   30643 node_conditions.go:105] duration metric: took 9.85979ms to run NodePressure ...
	I0810 22:19:03.062006   30643 start.go:231] waiting for startup goroutines ...
	I0810 22:19:03.323805   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:03.372030   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:03.403488   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:03.826325   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:03.867022   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:03.897645   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:04.325944   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:04.369318   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:04.399497   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:04.824179   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:04.875728   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:04.897804   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:05.329278   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:05.368906   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:05.398415   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:05.827603   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:05.866910   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:05.894531   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:06.329480   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:06.367671   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:06.396205   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:06.824787   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:06.869839   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:06.894807   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:07.324789   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:07.366752   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:07.397132   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:07.822585   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:07.868318   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:07.900519   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:08.323357   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:08.366480   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:08.394954   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:08.824965   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:08.872968   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:08.901577   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:09.323671   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:09.368009   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:09.394180   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:09.823861   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:09.867526   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:09.900890   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:10.325761   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:10.366891   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:10.394961   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:10.825169   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:10.867400   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:10.900782   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:11.329695   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:11.369812   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:11.401978   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:11.824588   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:11.866848   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:11.896249   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:12.324150   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:12.370647   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:12.395745   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:12.825425   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:12.874023   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:12.896731   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:13.326858   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:13.367840   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:13.395100   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:14.023918   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:14.042835   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:14.043008   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:14.323599   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:14.379747   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:14.396917   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:14.839803   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:14.866392   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:14.894128   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:15.323697   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:15.375543   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:15.402443   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:15.824469   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:15.866444   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:15.894479   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:16.323851   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:16.368382   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:16.396611   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:16.827844   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:16.870628   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:16.900373   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:17.333185   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:17.366337   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:17.395594   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:17.824924   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:17.868987   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:17.903015   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:18.324554   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:18.369640   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:18.396893   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:18.824931   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:18.867273   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:18.893977   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:19.330091   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:19.371649   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:19.401187   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:19.823807   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:19.869652   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:19.913118   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:20.324784   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:20.368704   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:20.398278   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:20.922272   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:20.930205   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:20.937669   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:21.332506   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:21.376689   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:21.400170   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:21.824071   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:21.866099   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:21.893730   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:22.694092   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:22.697753   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:22.698486   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:22.823452   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:22.869524   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:22.896059   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:23.324851   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:23.369597   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:23.401579   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:23.829400   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:23.869390   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:23.896555   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:24.329826   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:24.368391   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:24.395800   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:24.825580   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:24.871586   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:24.901029   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:25.322577   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:25.368840   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:25.395872   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:25.823501   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:25.865971   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:25.895288   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:26.323914   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:26.367116   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:26.395647   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:26.823472   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:26.868874   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:26.897342   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:27.324543   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:27.373066   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:27.394949   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:27.826744   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:27.866300   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:27.895579   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:28.322826   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:28.366249   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:28.394243   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:28.822928   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:28.866619   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:28.898599   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:29.324014   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:29.371265   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:29.402253   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:29.823203   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:29.866162   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:29.895491   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:30.335375   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:30.366481   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:30.414434   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:30.823140   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:30.871104   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:30.903893   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:31.324705   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:31.370201   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:31.396501   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:31.840857   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:31.868432   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:31.893957   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:32.323085   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:32.366726   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:32.395408   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:32.825187   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:32.901759   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:32.913779   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:33.328422   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:33.368191   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:33.397852   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:33.826424   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:33.866243   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:33.895793   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:34.329536   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:34.367931   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:34.396936   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:34.824144   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:34.871827   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:34.896277   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:35.326398   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:35.368704   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:35.395950   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:35.835280   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:35.866441   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:35.894401   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:36.327905   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:36.367482   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:36.396201   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:36.823683   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:36.865552   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:36.894177   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:37.330968   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:37.367763   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:37.393620   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:37.825567   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:37.866233   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:37.894274   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:38.324053   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:38.640088   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:38.640539   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:38.823899   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:38.867010   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:38.896468   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:39.324175   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:39.366573   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:39.395712   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:39.824022   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:39.867038   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:39.900100   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:40.322895   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:40.366901   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:40.393891   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:40.824779   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:40.870428   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:40.895409   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:41.324251   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:41.366620   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:41.395525   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:41.829919   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:41.866740   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:41.895275   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:42.323743   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:42.366676   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:42.395801   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:42.830630   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:42.867638   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:42.900794   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:43.323039   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:43.368483   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:43.394153   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:43.823020   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:43.868104   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:43.895656   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:44.322980   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:44.368204   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:44.396604   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:44.827961   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:44.867549   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:44.900483   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:45.324453   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:45.389917   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:45.406997   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:45.824753   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:45.867803   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:45.896817   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:46.325991   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:46.372387   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:46.396728   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:46.822506   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:46.874061   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:46.901121   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:47.323622   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:47.366802   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:47.393587   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:47.852463   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:47.866549   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:47.898659   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:48.323502   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:48.366468   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:48.395119   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:48.824003   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:48.866444   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:48.894378   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:49.324252   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:49.368347   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:49.396080   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:49.823851   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:49.867978   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:49.900693   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:50.326903   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:50.371402   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:50.394438   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:50.824058   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:50.867678   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:50.894876   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:51.332187   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:51.387833   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:51.407342   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:51.823635   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:51.867950   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:51.895281   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:52.330531   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:52.366895   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:52.395462   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:52.835197   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:52.869452   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:52.895175   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:53.323796   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:53.366469   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:53.397129   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:53.825344   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:53.866780   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:53.895563   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:54.324007   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:54.384262   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:54.399796   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:54.823928   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:54.874653   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:54.896653   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:55.323935   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:55.367236   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:55.394025   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:55.826159   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:55.872584   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:55.895245   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:56.322465   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:56.366403   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:56.395327   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:56.825564   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:56.866305   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:56.894348   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:57.325516   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:57.365990   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:57.394998   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:57.824823   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:57.878104   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:57.901506   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:58.326011   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:58.369426   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:58.395941   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:58.826042   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:58.868965   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:58.895658   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:59.327763   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:59.369165   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:59.395037   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:19:59.825164   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:19:59.867399   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:19:59.894332   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:00.323549   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:00.370325   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:20:00.395608   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:00.824519   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:00.866245   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:20:00.898479   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:01.323496   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:01.371843   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:20:01.399023   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:01.823697   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:01.867004   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:20:01.900545   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:02.322793   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:02.371891   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:20:02.398859   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:02.823425   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:02.870754   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:20:02.894909   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:03.324771   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:03.369001   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:20:03.394511   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:03.823063   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:03.880965   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:20:03.894938   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:04.338706   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:04.378858   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0810 22:20:04.397452   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:04.828897   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:04.866487   30643 kapi.go:108] duration metric: took 1m7.667710535s to wait for kubernetes.io/minikube-addons=registry ...
	I0810 22:20:04.896865   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:05.327330   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:05.395509   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:05.824060   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:05.894115   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:06.324607   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:06.394363   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:06.823900   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:06.899403   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:07.323701   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:07.399250   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:07.828477   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:07.903029   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:08.324178   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:08.395344   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:08.823993   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:08.900760   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:09.322860   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:09.395516   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:09.823653   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:09.897239   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:10.326606   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:10.401265   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:11.491246   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:11.491460   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:11.823182   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:11.895949   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:12.327136   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:12.395073   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:12.823853   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:12.893735   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:13.324751   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:13.396235   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:13.823944   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:13.897124   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:14.979141   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:14.979241   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:15.323213   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:15.397870   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:15.840686   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:15.896038   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:16.323264   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:16.406010   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:16.823008   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:16.898751   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:17.358755   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:17.395487   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:17.825107   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:17.895264   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:18.328250   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:18.395995   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:18.822932   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:18.895447   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:19.323204   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:19.399629   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:19.825262   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:19.902242   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:20.326362   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:20.395631   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:20.852394   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:20.899014   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:21.323526   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:21.394639   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:21.824162   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:21.895589   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:22.325322   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:22.396222   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:22.824855   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:22.900842   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:23.325014   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:23.394753   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:23.824391   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:23.894781   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:24.323955   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:24.405814   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:24.825108   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:24.895514   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:25.323940   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:25.396346   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:25.823351   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:25.895321   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:26.324480   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:26.394973   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:26.823814   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:26.895473   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:27.330537   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:27.395239   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:27.834233   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:27.900777   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:28.326333   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:28.400437   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:28.824072   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:28.894772   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:29.324603   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:29.396382   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:29.927061   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:29.928213   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:30.327052   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:30.395135   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:30.823864   30643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0810 22:20:30.907964   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:31.324477   30643 kapi.go:108] duration metric: took 1m34.12425244s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0810 22:20:31.395262   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:31.895603   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:32.405154   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:32.900810   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:33.396327   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:33.899849   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:34.397653   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:34.896811   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:35.398464   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:35.897055   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:36.396144   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:36.899687   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:37.397407   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:37.894232   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:38.397187   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:38.902156   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:39.395760   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:39.895767   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:40.396276   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:40.896571   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:41.394150   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:41.906118   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:42.397809   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:42.896322   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:43.396529   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:43.895264   30643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0810 22:20:44.396935   30643 kapi.go:108] duration metric: took 1m45.034184851s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0810 22:20:44.399039   30643 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, helm-tiller, metrics-server, volumesnapshots, olm, registry, ingress, csi-hostpath-driver
	I0810 22:20:44.399064   30643 addons.go:344] enableAddons completed in 1m56.586879163s
	I0810 22:20:44.446199   30643 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0810 22:20:44.448094   30643 out.go:177] * Done! kubectl is now configured to use "addons-20210810221736-30291" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Tue 2021-08-10 22:17:47 UTC, end at Tue 2021-08-10 22:25:51 UTC. --
	Aug 10 22:25:50 addons-20210810221736-30291 crio[2072]: time="2021-08-10 22:25:50.938523350Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=77e920bb-aa64-4a80-bae0-b8601448ba35 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:25:50 addons-20210810221736-30291 crio[2072]: time="2021-08-10 22:25:50.942061689Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b061b156acfdc296f16162882a4dd3fe7679683c7c4efe7d9534dc097eba7a3,PodSandboxId:3f895a24a6665a42aa2953ccb0f7cf3479d8059edead3756b1416ea51d2a1bc8,Metadata:&ContainerMetadata{Name:etcd-restore-operator,Attempt:0,},Image:&ImageSpec{Image:9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628634144788907572,Labels:map[string]string{io.kubernetes.container.name: etcd-restore-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-2jqws,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: 813a1198-4b83-4df1-99d2-f935cf883e6c,},Annotations:map[string]string{io.kubernetes.container.hash: 45a86a81,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:899fd22ea7edeb5db64a4c147a1c62a417b78fd6a18e926a30ab4ab51e4f44d9,PodSandboxId:3f895a24a6665a42aa2953ccb0f7cf3479d8059edead3756b1416ea51d2a1bc8,Metadata:&ContainerMetadata{Name:etcd-backup-operator,Attempt:0,},Image:&ImageSpec{Image:9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628634144377145686,Labels:map[string]string{io.kubernetes.container.name: etcd-backup-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-2jqws,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: 813a1198-4b83-4df1-99d2-f935cf883e6c,},Annotations:map[string]string{io.kubernetes.container.hash: c48b9281,io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f888421df568f4e525ec67a880bdfb95c0d8b1dfbf7f8e18475e6c31aae3d7b6,PodSandboxId:3f895a24a6665a42aa2953ccb0f7cf3479d8059edead3756b1416ea51d2a1bc8,Metadata:&ContainerMetadata{Name:etcd-operator,Attempt:0,},Image:&ImageSpec{Image:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628634143999398194,Labels:map[string]string{io.kubernetes.container.name: etcd-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-2jqws,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: 813a1198-4b83-4df1-99d2-f935cf883e6c,},Annotations:map[string]string{io.kubernetes.container.hash: f69cbd11,io.kubernetes.contai
ner.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f306c76739a5954a648a9696bcc3f3ac8284b0f337f5cc3aec8914a628ff6189,PodSandboxId:61b8c7125edd24c79359b13bd2a6f02c91cb7546aa1728ea44d2674174a75e33,Metadata:&ContainerMetadata{Name:private-image-eu,Attempt:0,},Image:&ImageSpec{Image:europe-west1-docker.pkg.dev/k8s-minikube/test-artifacts-eu/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,Annotations:map[string]string{},},ImageRef:europe-west1-docker.pkg.dev/k8s-minikube/test-artifacts-eu/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,State:CONTAINER_RUNNING,CreatedAt:1628634129281260783,Labels:map[string]string{io.kubernetes.container.name: private-image-eu,io.kubernetes.pod.name: private-image-eu-5956d58f9f-g6tz4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9114b208-93a1-4ef6
-a7cc-b77ef5bb5565,},Annotations:map[string]string{io.kubernetes.container.hash: 16bbf437,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f98d6cfb96e0ca72e24067278d400ac4346d88b7a8e52eaf653b154611724bd4,PodSandboxId:e00c07c230e012a777f69302a37229b9e32ebdc03cf3271e7b32b2ed82337084,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce,State:CONTAINER_RUNNING,CreatedAt:1628634119379939531,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 01307f34-9c2c-44f7-b67a-b6252d68db87,},Annotations
:map[string]string{io.kubernetes.container.hash: 10a02bdf,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f5c7c2dfb4580416a0d76c89267fb736011605c514deb56d6fbaf73021fe080,PodSandboxId:35c9c115dc99baf01feb7216e20e76c4392733fef3a585395adaec8a90078581,Metadata:&ContainerMetadata{Name:private-image,Attempt:0,},Image:&ImageSpec{Image:us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,Annotations:map[string]string{},},ImageRef:us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,State:CONTAINER_RUNNING,CreatedAt:1628634111930096127,Labels:map[string]string{io.kubernetes.container.name: private-image,io.kubernetes
.pod.name: private-image-7ff9c8c74f-9v7sz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f58dc798-47b6-4645-9f6d-71c0d033c8ef,},Annotations:map[string]string{io.kubernetes.container.hash: 98039a49,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36a9f0ee1e0346d9f3ea791a27d841a8b67f3275dab3d819a08634443369cbf3,PodSandboxId:86341f48b30f739a0560512dd84580346ba407628c9df19558d7f4b8a0d75043,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,State:CONTAINER_RUNNING,CreatedAt:1628634078427853242,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernete
s.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b283326c-6afe-4ed1-ba90-188232830913,},Annotations:map[string]string{io.kubernetes.container.hash: b7d8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:523764298f1c685006f822bd9ce10f658e322e1232adc49faa50201df6ea6865,PodSandboxId:3e9dd75cc73baf324d467c1c9f168893110edc1d847e1d261d75fa0799fa30b8,Metadata:&ContainerMetadata{Name:packageserver,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628634015975429512,Labels:map[string]string{io.kubernetes.container.name: packageserver,io.kubernetes.p
od.name: packageserver-677ff7d94-xkd8s,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 786954a1-e70a-4b95-a9e5-acbfe8a49bf6,},Annotations:map[string]string{io.kubernetes.container.hash: 779e9e50,io.kubernetes.container.ports: [{\"containerPort\":5443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71e14b7d4cabbdf121052666e72d7b14ba3508742e8e347e3c489e178c2a679e,PodSandboxId:000f05041462e2d1d2a89adfbf568feaef089a03b7cf1406a36897fe1f74289c,Metadata:&ContainerMetadata{Name:packageserver,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,
CreatedAt:1628634015342444550,Labels:map[string]string{io.kubernetes.container.name: packageserver,io.kubernetes.pod.name: packageserver-677ff7d94-6q8mg,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 26b191d6-0f8d-4e25-8b1a-7d28d0e7984e,},Annotations:map[string]string{io.kubernetes.container.hash: 46b0d32d,io.kubernetes.container.ports: [{\"containerPort\":5443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef8ad851b5c782cf15fa6804ddd2791480d23f0c33325d6be70549fbf99541c7,PodSandboxId:ae8f145fb7f1ef8442de7914c2437788451b9d3e68afcb77b76cbb28a4004868,Metadata:&ContainerMetadata{Name:registry-server,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0,Annotations:map[string]string
{},},ImageRef:quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0,State:CONTAINER_RUNNING,CreatedAt:1628634014040211315,Labels:map[string]string{io.kubernetes.container.name: registry-server,io.kubernetes.pod.name: operatorhubio-catalog-dxpzx,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 9eb587c8-1ac0-4c31-ad75-392aeeab016c,},Annotations:map[string]string{io.kubernetes.container.hash: 9fe3feb7,io.kubernetes.container.ports: [{\"name\":\"grpc\",\"containerPort\":50051,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4005ce090065992b3e905653d50c802d72e3dff527ad6cf506e7eaecb80289a,PodSandboxId:faa00a4c81c6456617b275494a12f743282abcf0620b565ee69a5bfdc67d1350,Metadata:&ContainerMetadata{Name:olm-operator,Attempt:0,},Image:&ImageSpe
c{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628633957781847037,Labels:map[string]string{io.kubernetes.container.name: olm-operator,io.kubernetes.pod.name: olm-operator-859c88c96-94cpj,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: f10aa085-7448-40e8-870c-2035b6e406c0,},Annotations:map[string]string{io.kubernetes.container.hash: be0818ed,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":8081,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32466da554bdea568863cae13963c768b1d7ec
29efb8a52809cc292bf192c75d,PodSandboxId:fc8c458708b442a4437c1a4aa5b4603de8956d4bbe6fc24553d727f9c1671ec2,Metadata:&ContainerMetadata{Name:catalog-operator,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628633956829278226,Labels:map[string]string{io.kubernetes.container.name: catalog-operator,io.kubernetes.pod.name: catalog-operator-75d496484d-96kck,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: b26ca429-82c9-4e29-a1e4-e199c1594830,},Annotations:map[string]string{io.kubernetes.container.hash: 5ac4aa45,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":8081,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c70c907c6b18c522de143dfb6f9d30e92dbd0696f2950131bcb7a5fb87dc7892,PodSandboxId:cdcdd7a12338acb30cf842634651bd5dba3234021e172f56712e5f768ab96b91,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628633947638196372,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a21dae5f-beaf-40c2-a62b-1deec782e8ac,},Annotations:map[string]string{io.kubernetes.container.hash: 25e23c58,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc4a6a3d9877b8b4e8324753be3e37d6198a43b5da9190d1d153d96d71153724,PodSandboxId:e5298d8aa21e3c509ade50292d91f731b0b87ccc6cfbd38d56a8f76eb57fc835,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628633928832652493,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-fqn7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7785682-711b-4fb6-8eac-10af32eb4197,},Annotations:map[string]string{io.kubernetes.container.hash: e402732f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8748eec23730ca465804ba6be770747a0abe547a3d31565a6ba147713d0166,PodSandboxId:9020a823e792ed061153ee5aa33053dff1abb1d739dd16036e59f46494b3c1df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628633928060218350,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d4lb4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c768b33-9c26-4a9c-b300-dfba02
5b2acb,},Annotations:map[string]string{io.kubernetes.container.hash: 6f55461a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a35d5039eb3cdc9c794d9c6fdc2bbab2fd96a0114cbe2b8fb0e1607c9459dec7,PodSandboxId:d8a325deebbef6eaa230aef11ad9ab391ba2ba23a10cefd3b85f23b34ce1aaf9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628633905974204532,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-20210810221736-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0903f801e4a2f0a7fc8da83c33b35f6,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2063320d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc56e9e60cb3f185140a0c7db4fd6a69f7dc838ba9bbfcdd62163852fd28154e,PodSandboxId:39ae770b43e4fd258203caf2557ff10790f6bdd7348524ba3aa8f955248039ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628633905619734766,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-20210810221736-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0f0bddad13be9583fc305fb982a5bb7,},Annotat
ions:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5195118e3c0b1329e0b492eafe351023845e3a799604e1627bab1b359ef1863,PodSandboxId:8655f2907f64ca222e610464990875ec32ea1eee9eaf915b9209e8387c27af69,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628633905597544891,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-20210810221736-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bca43c269811ea9e4d198ffd73e4326,},Annotations:map
[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6da4cdeb9aaec2c6713d7e3a6af2b3ed2185dae82920ad94b6a45a6de953a51,PodSandboxId:2a29b22731dee06d8e37f4f4b3ac236c91f2e5a2269e7115025358f7a16bb60a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628633905284853880,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-20210810221736-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a80cb0ae9d45046376f291a8eb22ee79,},Annotations:map[string
]string{io.kubernetes.container.hash: cda108ff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=77e920bb-aa64-4a80-bae0-b8601448ba35 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:25:50 addons-20210810221736-30291 crio[2072]: time="2021-08-10 22:25:50.988336099Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5ed8aaaa-99b6-46b9-b484-47d882f37548 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:25:50 addons-20210810221736-30291 crio[2072]: time="2021-08-10 22:25:50.988482262Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5ed8aaaa-99b6-46b9-b484-47d882f37548 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:25:50 addons-20210810221736-30291 crio[2072]: time="2021-08-10 22:25:50.989110443Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b061b156acfdc296f16162882a4dd3fe7679683c7c4efe7d9534dc097eba7a3,PodSandboxId:3f895a24a6665a42aa2953ccb0f7cf3479d8059edead3756b1416ea51d2a1bc8,Metadata:&ContainerMetadata{Name:etcd-restore-operator,Attempt:0,},Image:&ImageSpec{Image:9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628634144788907572,Labels:map[string]string{io.kubernetes.container.name: etcd-restore-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-2jqws,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: 813a1198-4b83-4df1-99d2-f935cf883e6c,},Annotations:map[string]string{io.kubernetes.container.hash: 45a86a81,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:899fd22ea7edeb5db64a4c147a1c62a417b78fd6a18e926a30ab4ab51e4f44d9,PodSandboxId:3f895a24a6665a42aa2953ccb0f7cf3479d8059edead3756b1416ea51d2a1bc8,Metadata:&ContainerMetadata{Name:etcd-backup-operator,Attempt:0,},Image:&ImageSpec{Image:9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628634144377145686,Labels:map[string]string{io.kubernetes.container.name: etcd-backup-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-2jqws,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: 813a1198-4b83-4df1-99d2-f935cf883e6c,},Annotations:map[string]string{io.kubernetes.container.hash: c48b9281,io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f888421df568f4e525ec67a880bdfb95c0d8b1dfbf7f8e18475e6c31aae3d7b6,PodSandboxId:3f895a24a6665a42aa2953ccb0f7cf3479d8059edead3756b1416ea51d2a1bc8,Metadata:&ContainerMetadata{Name:etcd-operator,Attempt:0,},Image:&ImageSpec{Image:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628634143999398194,Labels:map[string]string{io.kubernetes.container.name: etcd-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-2jqws,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: 813a1198-4b83-4df1-99d2-f935cf883e6c,},Annotations:map[string]string{io.kubernetes.container.hash: f69cbd11,io.kubernetes.contai
ner.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f306c76739a5954a648a9696bcc3f3ac8284b0f337f5cc3aec8914a628ff6189,PodSandboxId:61b8c7125edd24c79359b13bd2a6f02c91cb7546aa1728ea44d2674174a75e33,Metadata:&ContainerMetadata{Name:private-image-eu,Attempt:0,},Image:&ImageSpec{Image:europe-west1-docker.pkg.dev/k8s-minikube/test-artifacts-eu/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,Annotations:map[string]string{},},ImageRef:europe-west1-docker.pkg.dev/k8s-minikube/test-artifacts-eu/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,State:CONTAINER_RUNNING,CreatedAt:1628634129281260783,Labels:map[string]string{io.kubernetes.container.name: private-image-eu,io.kubernetes.pod.name: private-image-eu-5956d58f9f-g6tz4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9114b208-93a1-4ef6
-a7cc-b77ef5bb5565,},Annotations:map[string]string{io.kubernetes.container.hash: 16bbf437,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f98d6cfb96e0ca72e24067278d400ac4346d88b7a8e52eaf653b154611724bd4,PodSandboxId:e00c07c230e012a777f69302a37229b9e32ebdc03cf3271e7b32b2ed82337084,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce,State:CONTAINER_RUNNING,CreatedAt:1628634119379939531,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 01307f34-9c2c-44f7-b67a-b6252d68db87,},Annotations
:map[string]string{io.kubernetes.container.hash: 10a02bdf,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f5c7c2dfb4580416a0d76c89267fb736011605c514deb56d6fbaf73021fe080,PodSandboxId:35c9c115dc99baf01feb7216e20e76c4392733fef3a585395adaec8a90078581,Metadata:&ContainerMetadata{Name:private-image,Attempt:0,},Image:&ImageSpec{Image:us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,Annotations:map[string]string{},},ImageRef:us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,State:CONTAINER_RUNNING,CreatedAt:1628634111930096127,Labels:map[string]string{io.kubernetes.container.name: private-image,io.kubernetes
.pod.name: private-image-7ff9c8c74f-9v7sz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f58dc798-47b6-4645-9f6d-71c0d033c8ef,},Annotations:map[string]string{io.kubernetes.container.hash: 98039a49,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36a9f0ee1e0346d9f3ea791a27d841a8b67f3275dab3d819a08634443369cbf3,PodSandboxId:86341f48b30f739a0560512dd84580346ba407628c9df19558d7f4b8a0d75043,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,State:CONTAINER_RUNNING,CreatedAt:1628634078427853242,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernete
s.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b283326c-6afe-4ed1-ba90-188232830913,},Annotations:map[string]string{io.kubernetes.container.hash: b7d8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:523764298f1c685006f822bd9ce10f658e322e1232adc49faa50201df6ea6865,PodSandboxId:3e9dd75cc73baf324d467c1c9f168893110edc1d847e1d261d75fa0799fa30b8,Metadata:&ContainerMetadata{Name:packageserver,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628634015975429512,Labels:map[string]string{io.kubernetes.container.name: packageserver,io.kubernetes.p
od.name: packageserver-677ff7d94-xkd8s,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 786954a1-e70a-4b95-a9e5-acbfe8a49bf6,},Annotations:map[string]string{io.kubernetes.container.hash: 779e9e50,io.kubernetes.container.ports: [{\"containerPort\":5443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71e14b7d4cabbdf121052666e72d7b14ba3508742e8e347e3c489e178c2a679e,PodSandboxId:000f05041462e2d1d2a89adfbf568feaef089a03b7cf1406a36897fe1f74289c,Metadata:&ContainerMetadata{Name:packageserver,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,
CreatedAt:1628634015342444550,Labels:map[string]string{io.kubernetes.container.name: packageserver,io.kubernetes.pod.name: packageserver-677ff7d94-6q8mg,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 26b191d6-0f8d-4e25-8b1a-7d28d0e7984e,},Annotations:map[string]string{io.kubernetes.container.hash: 46b0d32d,io.kubernetes.container.ports: [{\"containerPort\":5443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef8ad851b5c782cf15fa6804ddd2791480d23f0c33325d6be70549fbf99541c7,PodSandboxId:ae8f145fb7f1ef8442de7914c2437788451b9d3e68afcb77b76cbb28a4004868,Metadata:&ContainerMetadata{Name:registry-server,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0,Annotations:map[string]string
{},},ImageRef:quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0,State:CONTAINER_RUNNING,CreatedAt:1628634014040211315,Labels:map[string]string{io.kubernetes.container.name: registry-server,io.kubernetes.pod.name: operatorhubio-catalog-dxpzx,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 9eb587c8-1ac0-4c31-ad75-392aeeab016c,},Annotations:map[string]string{io.kubernetes.container.hash: 9fe3feb7,io.kubernetes.container.ports: [{\"name\":\"grpc\",\"containerPort\":50051,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4005ce090065992b3e905653d50c802d72e3dff527ad6cf506e7eaecb80289a,PodSandboxId:faa00a4c81c6456617b275494a12f743282abcf0620b565ee69a5bfdc67d1350,Metadata:&ContainerMetadata{Name:olm-operator,Attempt:0,},Image:&ImageSpe
c{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628633957781847037,Labels:map[string]string{io.kubernetes.container.name: olm-operator,io.kubernetes.pod.name: olm-operator-859c88c96-94cpj,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: f10aa085-7448-40e8-870c-2035b6e406c0,},Annotations:map[string]string{io.kubernetes.container.hash: be0818ed,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":8081,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32466da554bdea568863cae13963c768b1d7ec
29efb8a52809cc292bf192c75d,PodSandboxId:fc8c458708b442a4437c1a4aa5b4603de8956d4bbe6fc24553d727f9c1671ec2,Metadata:&ContainerMetadata{Name:catalog-operator,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628633956829278226,Labels:map[string]string{io.kubernetes.container.name: catalog-operator,io.kubernetes.pod.name: catalog-operator-75d496484d-96kck,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: b26ca429-82c9-4e29-a1e4-e199c1594830,},Annotations:map[string]string{io.kubernetes.container.hash: 5ac4aa45,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":8081,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c70c907c6b18c522de143dfb6f9d30e92dbd0696f2950131bcb7a5fb87dc7892,PodSandboxId:cdcdd7a12338acb30cf842634651bd5dba3234021e172f56712e5f768ab96b91,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628633947638196372,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a21dae5f-beaf-40c2-a62b-1deec782e8ac,},Annotations:map[string]string{io.kubernetes.container.hash: 25e23c58,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc4a6a3d9877b8b4e8324753be3e37d6198a43b5da9190d1d153d96d71153724,PodSandboxId:e5298d8aa21e3c509ade50292d91f731b0b87ccc6cfbd38d56a8f76eb57fc835,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628633928832652493,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-fqn7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7785682-711b-4fb6-8eac-10af32eb4197,},Annotations:map[string]string{io.kubernetes.container.hash: e402732f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8748eec23730ca465804ba6be770747a0abe547a3d31565a6ba147713d0166,PodSandboxId:9020a823e792ed061153ee5aa33053dff1abb1d739dd16036e59f46494b3c1df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628633928060218350,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d4lb4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c768b33-9c26-4a9c-b300-dfba02
5b2acb,},Annotations:map[string]string{io.kubernetes.container.hash: 6f55461a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a35d5039eb3cdc9c794d9c6fdc2bbab2fd96a0114cbe2b8fb0e1607c9459dec7,PodSandboxId:d8a325deebbef6eaa230aef11ad9ab391ba2ba23a10cefd3b85f23b34ce1aaf9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628633905974204532,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-20210810221736-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0903f801e4a2f0a7fc8da83c33b35f6,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2063320d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc56e9e60cb3f185140a0c7db4fd6a69f7dc838ba9bbfcdd62163852fd28154e,PodSandboxId:39ae770b43e4fd258203caf2557ff10790f6bdd7348524ba3aa8f955248039ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628633905619734766,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-20210810221736-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0f0bddad13be9583fc305fb982a5bb7,},Annotat
ions:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5195118e3c0b1329e0b492eafe351023845e3a799604e1627bab1b359ef1863,PodSandboxId:8655f2907f64ca222e610464990875ec32ea1eee9eaf915b9209e8387c27af69,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628633905597544891,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-20210810221736-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bca43c269811ea9e4d198ffd73e4326,},Annotations:map
[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6da4cdeb9aaec2c6713d7e3a6af2b3ed2185dae82920ad94b6a45a6de953a51,PodSandboxId:2a29b22731dee06d8e37f4f4b3ac236c91f2e5a2269e7115025358f7a16bb60a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628633905284853880,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-20210810221736-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a80cb0ae9d45046376f291a8eb22ee79,},Annotations:map[string
]string{io.kubernetes.container.hash: cda108ff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5ed8aaaa-99b6-46b9-b484-47d882f37548 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:25:51 addons-20210810221736-30291 crio[2072]: time="2021-08-10 22:25:51.048464674Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=cff85c58-de96-46bc-aa28-4fdab096ea5e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:25:51 addons-20210810221736-30291 crio[2072]: time="2021-08-10 22:25:51.048664730Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=cff85c58-de96-46bc-aa28-4fdab096ea5e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:25:51 addons-20210810221736-30291 crio[2072]: time="2021-08-10 22:25:51.049045676Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b061b156acfdc296f16162882a4dd3fe7679683c7c4efe7d9534dc097eba7a3,PodSandboxId:3f895a24a6665a42aa2953ccb0f7cf3479d8059edead3756b1416ea51d2a1bc8,Metadata:&ContainerMetadata{Name:etcd-restore-operator,Attempt:0,},Image:&ImageSpec{Image:9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628634144788907572,Labels:map[string]string{io.kubernetes.container.name: etcd-restore-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-2jqws,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: 813a1198-4b83-4df1-99d2-f935cf883e6c,},Annotations:map[string]string{io.kubernetes.container.hash: 45a86a81,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:899fd22ea7edeb5db64a4c147a1c62a417b78fd6a18e926a30ab4ab51e4f44d9,PodSandboxId:3f895a24a6665a42aa2953ccb0f7cf3479d8059edead3756b1416ea51d2a1bc8,Metadata:&ContainerMetadata{Name:etcd-backup-operator,Attempt:0,},Image:&ImageSpec{Image:9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628634144377145686,Labels:map[string]string{io.kubernetes.container.name: etcd-backup-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-2jqws,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: 813a1198-4b83-4df1-99d2-f935cf883e6c,},Annotations:map[string]string{io.kubernetes.container.hash: c48b9281,io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f888421df568f4e525ec67a880bdfb95c0d8b1dfbf7f8e18475e6c31aae3d7b6,PodSandboxId:3f895a24a6665a42aa2953ccb0f7cf3479d8059edead3756b1416ea51d2a1bc8,Metadata:&ContainerMetadata{Name:etcd-operator,Attempt:0,},Image:&ImageSpec{Image:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628634143999398194,Labels:map[string]string{io.kubernetes.container.name: etcd-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-2jqws,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: 813a1198-4b83-4df1-99d2-f935cf883e6c,},Annotations:map[string]string{io.kubernetes.container.hash: f69cbd11,io.kubernetes.contai
ner.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f306c76739a5954a648a9696bcc3f3ac8284b0f337f5cc3aec8914a628ff6189,PodSandboxId:61b8c7125edd24c79359b13bd2a6f02c91cb7546aa1728ea44d2674174a75e33,Metadata:&ContainerMetadata{Name:private-image-eu,Attempt:0,},Image:&ImageSpec{Image:europe-west1-docker.pkg.dev/k8s-minikube/test-artifacts-eu/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,Annotations:map[string]string{},},ImageRef:europe-west1-docker.pkg.dev/k8s-minikube/test-artifacts-eu/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,State:CONTAINER_RUNNING,CreatedAt:1628634129281260783,Labels:map[string]string{io.kubernetes.container.name: private-image-eu,io.kubernetes.pod.name: private-image-eu-5956d58f9f-g6tz4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9114b208-93a1-4ef6
-a7cc-b77ef5bb5565,},Annotations:map[string]string{io.kubernetes.container.hash: 16bbf437,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f98d6cfb96e0ca72e24067278d400ac4346d88b7a8e52eaf653b154611724bd4,PodSandboxId:e00c07c230e012a777f69302a37229b9e32ebdc03cf3271e7b32b2ed82337084,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce,State:CONTAINER_RUNNING,CreatedAt:1628634119379939531,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 01307f34-9c2c-44f7-b67a-b6252d68db87,},Annotations
:map[string]string{io.kubernetes.container.hash: 10a02bdf,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f5c7c2dfb4580416a0d76c89267fb736011605c514deb56d6fbaf73021fe080,PodSandboxId:35c9c115dc99baf01feb7216e20e76c4392733fef3a585395adaec8a90078581,Metadata:&ContainerMetadata{Name:private-image,Attempt:0,},Image:&ImageSpec{Image:us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,Annotations:map[string]string{},},ImageRef:us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,State:CONTAINER_RUNNING,CreatedAt:1628634111930096127,Labels:map[string]string{io.kubernetes.container.name: private-image,io.kubernetes
.pod.name: private-image-7ff9c8c74f-9v7sz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f58dc798-47b6-4645-9f6d-71c0d033c8ef,},Annotations:map[string]string{io.kubernetes.container.hash: 98039a49,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36a9f0ee1e0346d9f3ea791a27d841a8b67f3275dab3d819a08634443369cbf3,PodSandboxId:86341f48b30f739a0560512dd84580346ba407628c9df19558d7f4b8a0d75043,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,State:CONTAINER_RUNNING,CreatedAt:1628634078427853242,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernete
s.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b283326c-6afe-4ed1-ba90-188232830913,},Annotations:map[string]string{io.kubernetes.container.hash: b7d8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:523764298f1c685006f822bd9ce10f658e322e1232adc49faa50201df6ea6865,PodSandboxId:3e9dd75cc73baf324d467c1c9f168893110edc1d847e1d261d75fa0799fa30b8,Metadata:&ContainerMetadata{Name:packageserver,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628634015975429512,Labels:map[string]string{io.kubernetes.container.name: packageserver,io.kubernetes.p
od.name: packageserver-677ff7d94-xkd8s,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 786954a1-e70a-4b95-a9e5-acbfe8a49bf6,},Annotations:map[string]string{io.kubernetes.container.hash: 779e9e50,io.kubernetes.container.ports: [{\"containerPort\":5443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71e14b7d4cabbdf121052666e72d7b14ba3508742e8e347e3c489e178c2a679e,PodSandboxId:000f05041462e2d1d2a89adfbf568feaef089a03b7cf1406a36897fe1f74289c,Metadata:&ContainerMetadata{Name:packageserver,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,
CreatedAt:1628634015342444550,Labels:map[string]string{io.kubernetes.container.name: packageserver,io.kubernetes.pod.name: packageserver-677ff7d94-6q8mg,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 26b191d6-0f8d-4e25-8b1a-7d28d0e7984e,},Annotations:map[string]string{io.kubernetes.container.hash: 46b0d32d,io.kubernetes.container.ports: [{\"containerPort\":5443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef8ad851b5c782cf15fa6804ddd2791480d23f0c33325d6be70549fbf99541c7,PodSandboxId:ae8f145fb7f1ef8442de7914c2437788451b9d3e68afcb77b76cbb28a4004868,Metadata:&ContainerMetadata{Name:registry-server,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0,Annotations:map[string]string
{},},ImageRef:quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0,State:CONTAINER_RUNNING,CreatedAt:1628634014040211315,Labels:map[string]string{io.kubernetes.container.name: registry-server,io.kubernetes.pod.name: operatorhubio-catalog-dxpzx,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 9eb587c8-1ac0-4c31-ad75-392aeeab016c,},Annotations:map[string]string{io.kubernetes.container.hash: 9fe3feb7,io.kubernetes.container.ports: [{\"name\":\"grpc\",\"containerPort\":50051,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4005ce090065992b3e905653d50c802d72e3dff527ad6cf506e7eaecb80289a,PodSandboxId:faa00a4c81c6456617b275494a12f743282abcf0620b565ee69a5bfdc67d1350,Metadata:&ContainerMetadata{Name:olm-operator,Attempt:0,},Image:&ImageSpe
c{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628633957781847037,Labels:map[string]string{io.kubernetes.container.name: olm-operator,io.kubernetes.pod.name: olm-operator-859c88c96-94cpj,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: f10aa085-7448-40e8-870c-2035b6e406c0,},Annotations:map[string]string{io.kubernetes.container.hash: be0818ed,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":8081,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32466da554bdea568863cae13963c768b1d7ec
29efb8a52809cc292bf192c75d,PodSandboxId:fc8c458708b442a4437c1a4aa5b4603de8956d4bbe6fc24553d727f9c1671ec2,Metadata:&ContainerMetadata{Name:catalog-operator,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628633956829278226,Labels:map[string]string{io.kubernetes.container.name: catalog-operator,io.kubernetes.pod.name: catalog-operator-75d496484d-96kck,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: b26ca429-82c9-4e29-a1e4-e199c1594830,},Annotations:map[string]string{io.kubernetes.container.hash: 5ac4aa45,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":8081,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c70c907c6b18c522de143dfb6f9d30e92dbd0696f2950131bcb7a5fb87dc7892,PodSandboxId:cdcdd7a12338acb30cf842634651bd5dba3234021e172f56712e5f768ab96b91,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628633947638196372,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a21dae5f-beaf-40c2-a62b-1deec782e8ac,},Annotations:map[string]string{io.kubernetes.container.hash: 25e23c58,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc4a6a3d9877b8b4e8324753be3e37d6198a43b5da9190d1d153d96d71153724,PodSandboxId:e5298d8aa21e3c509ade50292d91f731b0b87ccc6cfbd38d56a8f76eb57fc835,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628633928832652493,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-fqn7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7785682-711b-4fb6-8eac-10af32eb4197,},Annotations:map[string]string{io.kubernetes.container.hash: e402732f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8748eec23730ca465804ba6be770747a0abe547a3d31565a6ba147713d0166,PodSandboxId:9020a823e792ed061153ee5aa33053dff1abb1d739dd16036e59f46494b3c1df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628633928060218350,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d4lb4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c768b33-9c26-4a9c-b300-dfba02
5b2acb,},Annotations:map[string]string{io.kubernetes.container.hash: 6f55461a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a35d5039eb3cdc9c794d9c6fdc2bbab2fd96a0114cbe2b8fb0e1607c9459dec7,PodSandboxId:d8a325deebbef6eaa230aef11ad9ab391ba2ba23a10cefd3b85f23b34ce1aaf9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628633905974204532,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-20210810221736-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0903f801e4a2f0a7fc8da83c33b35f6,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2063320d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc56e9e60cb3f185140a0c7db4fd6a69f7dc838ba9bbfcdd62163852fd28154e,PodSandboxId:39ae770b43e4fd258203caf2557ff10790f6bdd7348524ba3aa8f955248039ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628633905619734766,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-20210810221736-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0f0bddad13be9583fc305fb982a5bb7,},Annotat
ions:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5195118e3c0b1329e0b492eafe351023845e3a799604e1627bab1b359ef1863,PodSandboxId:8655f2907f64ca222e610464990875ec32ea1eee9eaf915b9209e8387c27af69,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628633905597544891,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-20210810221736-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bca43c269811ea9e4d198ffd73e4326,},Annotations:map
[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6da4cdeb9aaec2c6713d7e3a6af2b3ed2185dae82920ad94b6a45a6de953a51,PodSandboxId:2a29b22731dee06d8e37f4f4b3ac236c91f2e5a2269e7115025358f7a16bb60a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628633905284853880,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-20210810221736-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a80cb0ae9d45046376f291a8eb22ee79,},Annotations:map[string
]string{io.kubernetes.container.hash: cda108ff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=cff85c58-de96-46bc-aa28-4fdab096ea5e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:25:51 addons-20210810221736-30291 crio[2072]: time="2021-08-10 22:25:51.086286905Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=57ab0950-033a-4ccc-8119-5ecb2d5e8a3b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:25:51 addons-20210810221736-30291 crio[2072]: time="2021-08-10 22:25:51.086500117Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=57ab0950-033a-4ccc-8119-5ecb2d5e8a3b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:25:51 addons-20210810221736-30291 crio[2072]: time="2021-08-10 22:25:51.087034750Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b061b156acfdc296f16162882a4dd3fe7679683c7c4efe7d9534dc097eba7a3,PodSandboxId:3f895a24a6665a42aa2953ccb0f7cf3479d8059edead3756b1416ea51d2a1bc8,Metadata:&ContainerMetadata{Name:etcd-restore-operator,Attempt:0,},Image:&ImageSpec{Image:9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628634144788907572,Labels:map[string]string{io.kubernetes.container.name: etcd-restore-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-2jqws,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: 813a1198-4b83-4df1-99d2-f935cf883e6c,},Annotations:map[string]string{io.kubernetes.container.hash: 45a86a81,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:899fd22ea7edeb5db64a4c147a1c62a417b78fd6a18e926a30ab4ab51e4f44d9,PodSandboxId:3f895a24a6665a42aa2953ccb0f7cf3479d8059edead3756b1416ea51d2a1bc8,Metadata:&ContainerMetadata{Name:etcd-backup-operator,Attempt:0,},Image:&ImageSpec{Image:9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628634144377145686,Labels:map[string]string{io.kubernetes.container.name: etcd-backup-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-2jqws,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: 813a1198-4b83-4df1-99d2-f935cf883e6c,},Annotations:map[string]string{io.kubernetes.container.hash: c48b9281,io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f888421df568f4e525ec67a880bdfb95c0d8b1dfbf7f8e18475e6c31aae3d7b6,PodSandboxId:3f895a24a6665a42aa2953ccb0f7cf3479d8059edead3756b1416ea51d2a1bc8,Metadata:&ContainerMetadata{Name:etcd-operator,Attempt:0,},Image:&ImageSpec{Image:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628634143999398194,Labels:map[string]string{io.kubernetes.container.name: etcd-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-2jqws,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: 813a1198-4b83-4df1-99d2-f935cf883e6c,},Annotations:map[string]string{io.kubernetes.container.hash: f69cbd11,io.kubernetes.contai
ner.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f306c76739a5954a648a9696bcc3f3ac8284b0f337f5cc3aec8914a628ff6189,PodSandboxId:61b8c7125edd24c79359b13bd2a6f02c91cb7546aa1728ea44d2674174a75e33,Metadata:&ContainerMetadata{Name:private-image-eu,Attempt:0,},Image:&ImageSpec{Image:europe-west1-docker.pkg.dev/k8s-minikube/test-artifacts-eu/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,Annotations:map[string]string{},},ImageRef:europe-west1-docker.pkg.dev/k8s-minikube/test-artifacts-eu/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,State:CONTAINER_RUNNING,CreatedAt:1628634129281260783,Labels:map[string]string{io.kubernetes.container.name: private-image-eu,io.kubernetes.pod.name: private-image-eu-5956d58f9f-g6tz4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9114b208-93a1-4ef6
-a7cc-b77ef5bb5565,},Annotations:map[string]string{io.kubernetes.container.hash: 16bbf437,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f98d6cfb96e0ca72e24067278d400ac4346d88b7a8e52eaf653b154611724bd4,PodSandboxId:e00c07c230e012a777f69302a37229b9e32ebdc03cf3271e7b32b2ed82337084,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce,State:CONTAINER_RUNNING,CreatedAt:1628634119379939531,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 01307f34-9c2c-44f7-b67a-b6252d68db87,},Annotations
:map[string]string{io.kubernetes.container.hash: 10a02bdf,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f5c7c2dfb4580416a0d76c89267fb736011605c514deb56d6fbaf73021fe080,PodSandboxId:35c9c115dc99baf01feb7216e20e76c4392733fef3a585395adaec8a90078581,Metadata:&ContainerMetadata{Name:private-image,Attempt:0,},Image:&ImageSpec{Image:us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,Annotations:map[string]string{},},ImageRef:us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,State:CONTAINER_RUNNING,CreatedAt:1628634111930096127,Labels:map[string]string{io.kubernetes.container.name: private-image,io.kubernetes
.pod.name: private-image-7ff9c8c74f-9v7sz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f58dc798-47b6-4645-9f6d-71c0d033c8ef,},Annotations:map[string]string{io.kubernetes.container.hash: 98039a49,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36a9f0ee1e0346d9f3ea791a27d841a8b67f3275dab3d819a08634443369cbf3,PodSandboxId:86341f48b30f739a0560512dd84580346ba407628c9df19558d7f4b8a0d75043,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,State:CONTAINER_RUNNING,CreatedAt:1628634078427853242,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernete
s.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b283326c-6afe-4ed1-ba90-188232830913,},Annotations:map[string]string{io.kubernetes.container.hash: b7d8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:523764298f1c685006f822bd9ce10f658e322e1232adc49faa50201df6ea6865,PodSandboxId:3e9dd75cc73baf324d467c1c9f168893110edc1d847e1d261d75fa0799fa30b8,Metadata:&ContainerMetadata{Name:packageserver,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628634015975429512,Labels:map[string]string{io.kubernetes.container.name: packageserver,io.kubernetes.p
od.name: packageserver-677ff7d94-xkd8s,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 786954a1-e70a-4b95-a9e5-acbfe8a49bf6,},Annotations:map[string]string{io.kubernetes.container.hash: 779e9e50,io.kubernetes.container.ports: [{\"containerPort\":5443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71e14b7d4cabbdf121052666e72d7b14ba3508742e8e347e3c489e178c2a679e,PodSandboxId:000f05041462e2d1d2a89adfbf568feaef089a03b7cf1406a36897fe1f74289c,Metadata:&ContainerMetadata{Name:packageserver,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,
CreatedAt:1628634015342444550,Labels:map[string]string{io.kubernetes.container.name: packageserver,io.kubernetes.pod.name: packageserver-677ff7d94-6q8mg,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 26b191d6-0f8d-4e25-8b1a-7d28d0e7984e,},Annotations:map[string]string{io.kubernetes.container.hash: 46b0d32d,io.kubernetes.container.ports: [{\"containerPort\":5443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef8ad851b5c782cf15fa6804ddd2791480d23f0c33325d6be70549fbf99541c7,PodSandboxId:ae8f145fb7f1ef8442de7914c2437788451b9d3e68afcb77b76cbb28a4004868,Metadata:&ContainerMetadata{Name:registry-server,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0,Annotations:map[string]string
{},},ImageRef:quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0,State:CONTAINER_RUNNING,CreatedAt:1628634014040211315,Labels:map[string]string{io.kubernetes.container.name: registry-server,io.kubernetes.pod.name: operatorhubio-catalog-dxpzx,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 9eb587c8-1ac0-4c31-ad75-392aeeab016c,},Annotations:map[string]string{io.kubernetes.container.hash: 9fe3feb7,io.kubernetes.container.ports: [{\"name\":\"grpc\",\"containerPort\":50051,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4005ce090065992b3e905653d50c802d72e3dff527ad6cf506e7eaecb80289a,PodSandboxId:faa00a4c81c6456617b275494a12f743282abcf0620b565ee69a5bfdc67d1350,Metadata:&ContainerMetadata{Name:olm-operator,Attempt:0,},Image:&ImageSpe
c{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628633957781847037,Labels:map[string]string{io.kubernetes.container.name: olm-operator,io.kubernetes.pod.name: olm-operator-859c88c96-94cpj,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: f10aa085-7448-40e8-870c-2035b6e406c0,},Annotations:map[string]string{io.kubernetes.container.hash: be0818ed,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":8081,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32466da554bdea568863cae13963c768b1d7ec
29efb8a52809cc292bf192c75d,PodSandboxId:fc8c458708b442a4437c1a4aa5b4603de8956d4bbe6fc24553d727f9c1671ec2,Metadata:&ContainerMetadata{Name:catalog-operator,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628633956829278226,Labels:map[string]string{io.kubernetes.container.name: catalog-operator,io.kubernetes.pod.name: catalog-operator-75d496484d-96kck,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: b26ca429-82c9-4e29-a1e4-e199c1594830,},Annotations:map[string]string{io.kubernetes.container.hash: 5ac4aa45,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":8081,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c70c907c6b18c522de143dfb6f9d30e92dbd0696f2950131bcb7a5fb87dc7892,PodSandboxId:cdcdd7a12338acb30cf842634651bd5dba3234021e172f56712e5f768ab96b91,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628633947638196372,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a21dae5f-beaf-40c2-a62b-1deec782e8ac,},Annotations:map[string]string{io.kubernetes.container.hash: 25e23c58,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc4a6a3d9877b8b4e8324753be3e37d6198a43b5da9190d1d153d96d71153724,PodSandboxId:e5298d8aa21e3c509ade50292d91f731b0b87ccc6cfbd38d56a8f76eb57fc835,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628633928832652493,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-fqn7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7785682-711b-4fb6-8eac-10af32eb4197,},Annotations:map[string]string{io.kubernetes.container.hash: e402732f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8748eec23730ca465804ba6be770747a0abe547a3d31565a6ba147713d0166,PodSandboxId:9020a823e792ed061153ee5aa33053dff1abb1d739dd16036e59f46494b3c1df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628633928060218350,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d4lb4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c768b33-9c26-4a9c-b300-dfba02
5b2acb,},Annotations:map[string]string{io.kubernetes.container.hash: 6f55461a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a35d5039eb3cdc9c794d9c6fdc2bbab2fd96a0114cbe2b8fb0e1607c9459dec7,PodSandboxId:d8a325deebbef6eaa230aef11ad9ab391ba2ba23a10cefd3b85f23b34ce1aaf9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628633905974204532,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-20210810221736-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0903f801e4a2f0a7fc8da83c33b35f6,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2063320d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc56e9e60cb3f185140a0c7db4fd6a69f7dc838ba9bbfcdd62163852fd28154e,PodSandboxId:39ae770b43e4fd258203caf2557ff10790f6bdd7348524ba3aa8f955248039ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628633905619734766,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-20210810221736-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0f0bddad13be9583fc305fb982a5bb7,},Annotat
ions:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5195118e3c0b1329e0b492eafe351023845e3a799604e1627bab1b359ef1863,PodSandboxId:8655f2907f64ca222e610464990875ec32ea1eee9eaf915b9209e8387c27af69,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628633905597544891,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-20210810221736-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bca43c269811ea9e4d198ffd73e4326,},Annotations:map
[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6da4cdeb9aaec2c6713d7e3a6af2b3ed2185dae82920ad94b6a45a6de953a51,PodSandboxId:2a29b22731dee06d8e37f4f4b3ac236c91f2e5a2269e7115025358f7a16bb60a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628633905284853880,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-20210810221736-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a80cb0ae9d45046376f291a8eb22ee79,},Annotations:map[string
]string{io.kubernetes.container.hash: cda108ff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=57ab0950-033a-4ccc-8119-5ecb2d5e8a3b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:25:51 addons-20210810221736-30291 crio[2072]: time="2021-08-10 22:25:51.143430178Z" level=debug msg="Request: &VersionRequest{Version:0.1.0,}" file="go-grpc-middleware/chain.go:25" id=7ce6afe2-78ef-4454-a859-0ad6baa9517c name=/runtime.v1alpha2.RuntimeService/Version
	Aug 10 22:25:51 addons-20210810221736-30291 crio[2072]: time="2021-08-10 22:25:51.143520518Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.20.2,RuntimeApiVersion:v1alpha1,}" file="go-grpc-middleware/chain.go:25" id=7ce6afe2-78ef-4454-a859-0ad6baa9517c name=/runtime.v1alpha2.RuntimeService/Version
	Aug 10 22:25:51 addons-20210810221736-30291 crio[2072]: time="2021-08-10 22:25:51.167485582Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7db0bab0-f163-4d07-824c-3a38939f229a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:25:51 addons-20210810221736-30291 crio[2072]: time="2021-08-10 22:25:51.167545181Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7db0bab0-f163-4d07-824c-3a38939f229a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:25:51 addons-20210810221736-30291 crio[2072]: time="2021-08-10 22:25:51.167991341Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b061b156acfdc296f16162882a4dd3fe7679683c7c4efe7d9534dc097eba7a3,PodSandboxId:3f895a24a6665a42aa2953ccb0f7cf3479d8059edead3756b1416ea51d2a1bc8,Metadata:&ContainerMetadata{Name:etcd-restore-operator,Attempt:0,},Image:&ImageSpec{Image:9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628634144788907572,Labels:map[string]string{io.kubernetes.container.name: etcd-restore-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-2jqws,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: 813a1198-4b83-4df1-99d2-f935cf883e6c,},Annotations:map[string]string{io.kubernetes.container.hash: 45a86a81,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:899fd22ea7edeb5db64a4c147a1c62a417b78fd6a18e926a30ab4ab51e4f44d9,PodSandboxId:3f895a24a6665a42aa2953ccb0f7cf3479d8059edead3756b1416ea51d2a1bc8,Metadata:&ContainerMetadata{Name:etcd-backup-operator,Attempt:0,},Image:&ImageSpec{Image:9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628634144377145686,Labels:map[string]string{io.kubernetes.container.name: etcd-backup-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-2jqws,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: 813a1198-4b83-4df1-99d2-f935cf883e6c,},Annotations:map[string]string{io.kubernetes.container.hash: c48b9281,io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f888421df568f4e525ec67a880bdfb95c0d8b1dfbf7f8e18475e6c31aae3d7b6,PodSandboxId:3f895a24a6665a42aa2953ccb0f7cf3479d8059edead3756b1416ea51d2a1bc8,Metadata:&ContainerMetadata{Name:etcd-operator,Attempt:0,},Image:&ImageSpec{Image:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628634143999398194,Labels:map[string]string{io.kubernetes.container.name: etcd-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-2jqws,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: 813a1198-4b83-4df1-99d2-f935cf883e6c,},Annotations:map[string]string{io.kubernetes.container.hash: f69cbd11,io.kubernetes.contai
ner.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f306c76739a5954a648a9696bcc3f3ac8284b0f337f5cc3aec8914a628ff6189,PodSandboxId:61b8c7125edd24c79359b13bd2a6f02c91cb7546aa1728ea44d2674174a75e33,Metadata:&ContainerMetadata{Name:private-image-eu,Attempt:0,},Image:&ImageSpec{Image:europe-west1-docker.pkg.dev/k8s-minikube/test-artifacts-eu/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,Annotations:map[string]string{},},ImageRef:europe-west1-docker.pkg.dev/k8s-minikube/test-artifacts-eu/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,State:CONTAINER_RUNNING,CreatedAt:1628634129281260783,Labels:map[string]string{io.kubernetes.container.name: private-image-eu,io.kubernetes.pod.name: private-image-eu-5956d58f9f-g6tz4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9114b208-93a1-4ef6
-a7cc-b77ef5bb5565,},Annotations:map[string]string{io.kubernetes.container.hash: 16bbf437,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f98d6cfb96e0ca72e24067278d400ac4346d88b7a8e52eaf653b154611724bd4,PodSandboxId:e00c07c230e012a777f69302a37229b9e32ebdc03cf3271e7b32b2ed82337084,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce,State:CONTAINER_RUNNING,CreatedAt:1628634119379939531,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 01307f34-9c2c-44f7-b67a-b6252d68db87,},Annotations
:map[string]string{io.kubernetes.container.hash: 10a02bdf,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f5c7c2dfb4580416a0d76c89267fb736011605c514deb56d6fbaf73021fe080,PodSandboxId:35c9c115dc99baf01feb7216e20e76c4392733fef3a585395adaec8a90078581,Metadata:&ContainerMetadata{Name:private-image,Attempt:0,},Image:&ImageSpec{Image:us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,Annotations:map[string]string{},},ImageRef:us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,State:CONTAINER_RUNNING,CreatedAt:1628634111930096127,Labels:map[string]string{io.kubernetes.container.name: private-image,io.kubernetes
.pod.name: private-image-7ff9c8c74f-9v7sz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f58dc798-47b6-4645-9f6d-71c0d033c8ef,},Annotations:map[string]string{io.kubernetes.container.hash: 98039a49,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36a9f0ee1e0346d9f3ea791a27d841a8b67f3275dab3d819a08634443369cbf3,PodSandboxId:86341f48b30f739a0560512dd84580346ba407628c9df19558d7f4b8a0d75043,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,State:CONTAINER_RUNNING,CreatedAt:1628634078427853242,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernete
s.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b283326c-6afe-4ed1-ba90-188232830913,},Annotations:map[string]string{io.kubernetes.container.hash: b7d8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:523764298f1c685006f822bd9ce10f658e322e1232adc49faa50201df6ea6865,PodSandboxId:3e9dd75cc73baf324d467c1c9f168893110edc1d847e1d261d75fa0799fa30b8,Metadata:&ContainerMetadata{Name:packageserver,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628634015975429512,Labels:map[string]string{io.kubernetes.container.name: packageserver,io.kubernetes.p
od.name: packageserver-677ff7d94-xkd8s,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 786954a1-e70a-4b95-a9e5-acbfe8a49bf6,},Annotations:map[string]string{io.kubernetes.container.hash: 779e9e50,io.kubernetes.container.ports: [{\"containerPort\":5443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71e14b7d4cabbdf121052666e72d7b14ba3508742e8e347e3c489e178c2a679e,PodSandboxId:000f05041462e2d1d2a89adfbf568feaef089a03b7cf1406a36897fe1f74289c,Metadata:&ContainerMetadata{Name:packageserver,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,
CreatedAt:1628634015342444550,Labels:map[string]string{io.kubernetes.container.name: packageserver,io.kubernetes.pod.name: packageserver-677ff7d94-6q8mg,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 26b191d6-0f8d-4e25-8b1a-7d28d0e7984e,},Annotations:map[string]string{io.kubernetes.container.hash: 46b0d32d,io.kubernetes.container.ports: [{\"containerPort\":5443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef8ad851b5c782cf15fa6804ddd2791480d23f0c33325d6be70549fbf99541c7,PodSandboxId:ae8f145fb7f1ef8442de7914c2437788451b9d3e68afcb77b76cbb28a4004868,Metadata:&ContainerMetadata{Name:registry-server,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0,Annotations:map[string]string
{},},ImageRef:quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0,State:CONTAINER_RUNNING,CreatedAt:1628634014040211315,Labels:map[string]string{io.kubernetes.container.name: registry-server,io.kubernetes.pod.name: operatorhubio-catalog-dxpzx,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 9eb587c8-1ac0-4c31-ad75-392aeeab016c,},Annotations:map[string]string{io.kubernetes.container.hash: 9fe3feb7,io.kubernetes.container.ports: [{\"name\":\"grpc\",\"containerPort\":50051,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4005ce090065992b3e905653d50c802d72e3dff527ad6cf506e7eaecb80289a,PodSandboxId:faa00a4c81c6456617b275494a12f743282abcf0620b565ee69a5bfdc67d1350,Metadata:&ContainerMetadata{Name:olm-operator,Attempt:0,},Image:&ImageSpe
c{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628633957781847037,Labels:map[string]string{io.kubernetes.container.name: olm-operator,io.kubernetes.pod.name: olm-operator-859c88c96-94cpj,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: f10aa085-7448-40e8-870c-2035b6e406c0,},Annotations:map[string]string{io.kubernetes.container.hash: be0818ed,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":8081,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32466da554bdea568863cae13963c768b1d7ec
29efb8a52809cc292bf192c75d,PodSandboxId:fc8c458708b442a4437c1a4aa5b4603de8956d4bbe6fc24553d727f9c1671ec2,Metadata:&ContainerMetadata{Name:catalog-operator,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628633956829278226,Labels:map[string]string{io.kubernetes.container.name: catalog-operator,io.kubernetes.pod.name: catalog-operator-75d496484d-96kck,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: b26ca429-82c9-4e29-a1e4-e199c1594830,},Annotations:map[string]string{io.kubernetes.container.hash: 5ac4aa45,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":8081,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c70c907c6b18c522de143dfb6f9d30e92dbd0696f2950131bcb7a5fb87dc7892,PodSandboxId:cdcdd7a12338acb30cf842634651bd5dba3234021e172f56712e5f768ab96b91,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628633947638196372,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a21dae5f-beaf-40c2-a62b-1deec782e8ac,},Annotations:map[string]string{io.kubernetes.container.hash: 25e23c58,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc4a6a3d9877b8b4e8324753be3e37d6198a43b5da9190d1d153d96d71153724,PodSandboxId:e5298d8aa21e3c509ade50292d91f731b0b87ccc6cfbd38d56a8f76eb57fc835,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628633928832652493,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-fqn7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7785682-711b-4fb6-8eac-10af32eb4197,},Annotations:map[string]string{io.kubernetes.container.hash: e402732f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8748eec23730ca465804ba6be770747a0abe547a3d31565a6ba147713d0166,PodSandboxId:9020a823e792ed061153ee5aa33053dff1abb1d739dd16036e59f46494b3c1df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628633928060218350,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d4lb4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c768b33-9c26-4a9c-b300-dfba02
5b2acb,},Annotations:map[string]string{io.kubernetes.container.hash: 6f55461a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a35d5039eb3cdc9c794d9c6fdc2bbab2fd96a0114cbe2b8fb0e1607c9459dec7,PodSandboxId:d8a325deebbef6eaa230aef11ad9ab391ba2ba23a10cefd3b85f23b34ce1aaf9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628633905974204532,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-20210810221736-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0903f801e4a2f0a7fc8da83c33b35f6,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2063320d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc56e9e60cb3f185140a0c7db4fd6a69f7dc838ba9bbfcdd62163852fd28154e,PodSandboxId:39ae770b43e4fd258203caf2557ff10790f6bdd7348524ba3aa8f955248039ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628633905619734766,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-20210810221736-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0f0bddad13be9583fc305fb982a5bb7,},Annotat
ions:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5195118e3c0b1329e0b492eafe351023845e3a799604e1627bab1b359ef1863,PodSandboxId:8655f2907f64ca222e610464990875ec32ea1eee9eaf915b9209e8387c27af69,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628633905597544891,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-20210810221736-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bca43c269811ea9e4d198ffd73e4326,},Annotations:map
[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6da4cdeb9aaec2c6713d7e3a6af2b3ed2185dae82920ad94b6a45a6de953a51,PodSandboxId:2a29b22731dee06d8e37f4f4b3ac236c91f2e5a2269e7115025358f7a16bb60a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628633905284853880,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-20210810221736-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a80cb0ae9d45046376f291a8eb22ee79,},Annotations:map[string
]string{io.kubernetes.container.hash: cda108ff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7db0bab0-f163-4d07-824c-3a38939f229a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:25:51 addons-20210810221736-30291 crio[2072]: time="2021-08-10 22:25:51.202914482Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=393fb0ec-a955-45e0-9bbf-924d5e8b1979 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:25:51 addons-20210810221736-30291 crio[2072]: time="2021-08-10 22:25:51.203063990Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=393fb0ec-a955-45e0-9bbf-924d5e8b1979 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:25:51 addons-20210810221736-30291 crio[2072]: time="2021-08-10 22:25:51.204093217Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b061b156acfdc296f16162882a4dd3fe7679683c7c4efe7d9534dc097eba7a3,PodSandboxId:3f895a24a6665a42aa2953ccb0f7cf3479d8059edead3756b1416ea51d2a1bc8,Metadata:&ContainerMetadata{Name:etcd-restore-operator,Attempt:0,},Image:&ImageSpec{Image:9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628634144788907572,Labels:map[string]string{io.kubernetes.container.name: etcd-restore-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-2jqws,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: 813a1198-4b83-4df1-99d2-f935cf883e6c,},Annotations:map[string]string{io.kubernetes.container.hash: 45a86a81,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:899fd22ea7edeb5db64a4c147a1c62a417b78fd6a18e926a30ab4ab51e4f44d9,PodSandboxId:3f895a24a6665a42aa2953ccb0f7cf3479d8059edead3756b1416ea51d2a1bc8,Metadata:&ContainerMetadata{Name:etcd-backup-operator,Attempt:0,},Image:&ImageSpec{Image:9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628634144377145686,Labels:map[string]string{io.kubernetes.container.name: etcd-backup-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-2jqws,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: 813a1198-4b83-4df1-99d2-f935cf883e6c,},Annotations:map[string]string{io.kubernetes.container.hash: c48b9281,io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f888421df568f4e525ec67a880bdfb95c0d8b1dfbf7f8e18475e6c31aae3d7b6,PodSandboxId:3f895a24a6665a42aa2953ccb0f7cf3479d8059edead3756b1416ea51d2a1bc8,Metadata:&ContainerMetadata{Name:etcd-operator,Attempt:0,},Image:&ImageSpec{Image:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628634143999398194,Labels:map[string]string{io.kubernetes.container.name: etcd-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-2jqws,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: 813a1198-4b83-4df1-99d2-f935cf883e6c,},Annotations:map[string]string{io.kubernetes.container.hash: f69cbd11,io.kubernetes.contai
ner.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f306c76739a5954a648a9696bcc3f3ac8284b0f337f5cc3aec8914a628ff6189,PodSandboxId:61b8c7125edd24c79359b13bd2a6f02c91cb7546aa1728ea44d2674174a75e33,Metadata:&ContainerMetadata{Name:private-image-eu,Attempt:0,},Image:&ImageSpec{Image:europe-west1-docker.pkg.dev/k8s-minikube/test-artifacts-eu/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,Annotations:map[string]string{},},ImageRef:europe-west1-docker.pkg.dev/k8s-minikube/test-artifacts-eu/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,State:CONTAINER_RUNNING,CreatedAt:1628634129281260783,Labels:map[string]string{io.kubernetes.container.name: private-image-eu,io.kubernetes.pod.name: private-image-eu-5956d58f9f-g6tz4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9114b208-93a1-4ef6
-a7cc-b77ef5bb5565,},Annotations:map[string]string{io.kubernetes.container.hash: 16bbf437,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f98d6cfb96e0ca72e24067278d400ac4346d88b7a8e52eaf653b154611724bd4,PodSandboxId:e00c07c230e012a777f69302a37229b9e32ebdc03cf3271e7b32b2ed82337084,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce,State:CONTAINER_RUNNING,CreatedAt:1628634119379939531,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 01307f34-9c2c-44f7-b67a-b6252d68db87,},Annotations
:map[string]string{io.kubernetes.container.hash: 10a02bdf,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f5c7c2dfb4580416a0d76c89267fb736011605c514deb56d6fbaf73021fe080,PodSandboxId:35c9c115dc99baf01feb7216e20e76c4392733fef3a585395adaec8a90078581,Metadata:&ContainerMetadata{Name:private-image,Attempt:0,},Image:&ImageSpec{Image:us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,Annotations:map[string]string{},},ImageRef:us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,State:CONTAINER_RUNNING,CreatedAt:1628634111930096127,Labels:map[string]string{io.kubernetes.container.name: private-image,io.kubernetes
.pod.name: private-image-7ff9c8c74f-9v7sz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f58dc798-47b6-4645-9f6d-71c0d033c8ef,},Annotations:map[string]string{io.kubernetes.container.hash: 98039a49,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36a9f0ee1e0346d9f3ea791a27d841a8b67f3275dab3d819a08634443369cbf3,PodSandboxId:86341f48b30f739a0560512dd84580346ba407628c9df19558d7f4b8a0d75043,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,State:CONTAINER_RUNNING,CreatedAt:1628634078427853242,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernete
s.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b283326c-6afe-4ed1-ba90-188232830913,},Annotations:map[string]string{io.kubernetes.container.hash: b7d8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:523764298f1c685006f822bd9ce10f658e322e1232adc49faa50201df6ea6865,PodSandboxId:3e9dd75cc73baf324d467c1c9f168893110edc1d847e1d261d75fa0799fa30b8,Metadata:&ContainerMetadata{Name:packageserver,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628634015975429512,Labels:map[string]string{io.kubernetes.container.name: packageserver,io.kubernetes.p
od.name: packageserver-677ff7d94-xkd8s,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 786954a1-e70a-4b95-a9e5-acbfe8a49bf6,},Annotations:map[string]string{io.kubernetes.container.hash: 779e9e50,io.kubernetes.container.ports: [{\"containerPort\":5443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71e14b7d4cabbdf121052666e72d7b14ba3508742e8e347e3c489e178c2a679e,PodSandboxId:000f05041462e2d1d2a89adfbf568feaef089a03b7cf1406a36897fe1f74289c,Metadata:&ContainerMetadata{Name:packageserver,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,
CreatedAt:1628634015342444550,Labels:map[string]string{io.kubernetes.container.name: packageserver,io.kubernetes.pod.name: packageserver-677ff7d94-6q8mg,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 26b191d6-0f8d-4e25-8b1a-7d28d0e7984e,},Annotations:map[string]string{io.kubernetes.container.hash: 46b0d32d,io.kubernetes.container.ports: [{\"containerPort\":5443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef8ad851b5c782cf15fa6804ddd2791480d23f0c33325d6be70549fbf99541c7,PodSandboxId:ae8f145fb7f1ef8442de7914c2437788451b9d3e68afcb77b76cbb28a4004868,Metadata:&ContainerMetadata{Name:registry-server,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0,Annotations:map[string]string
{},},ImageRef:quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0,State:CONTAINER_RUNNING,CreatedAt:1628634014040211315,Labels:map[string]string{io.kubernetes.container.name: registry-server,io.kubernetes.pod.name: operatorhubio-catalog-dxpzx,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 9eb587c8-1ac0-4c31-ad75-392aeeab016c,},Annotations:map[string]string{io.kubernetes.container.hash: 9fe3feb7,io.kubernetes.container.ports: [{\"name\":\"grpc\",\"containerPort\":50051,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4005ce090065992b3e905653d50c802d72e3dff527ad6cf506e7eaecb80289a,PodSandboxId:faa00a4c81c6456617b275494a12f743282abcf0620b565ee69a5bfdc67d1350,Metadata:&ContainerMetadata{Name:olm-operator,Attempt:0,},Image:&ImageSpe
c{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628633957781847037,Labels:map[string]string{io.kubernetes.container.name: olm-operator,io.kubernetes.pod.name: olm-operator-859c88c96-94cpj,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: f10aa085-7448-40e8-870c-2035b6e406c0,},Annotations:map[string]string{io.kubernetes.container.hash: be0818ed,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":8081,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32466da554bdea568863cae13963c768b1d7ec
29efb8a52809cc292bf192c75d,PodSandboxId:fc8c458708b442a4437c1a4aa5b4603de8956d4bbe6fc24553d727f9c1671ec2,Metadata:&ContainerMetadata{Name:catalog-operator,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628633956829278226,Labels:map[string]string{io.kubernetes.container.name: catalog-operator,io.kubernetes.pod.name: catalog-operator-75d496484d-96kck,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: b26ca429-82c9-4e29-a1e4-e199c1594830,},Annotations:map[string]string{io.kubernetes.container.hash: 5ac4aa45,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":8081,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c70c907c6b18c522de143dfb6f9d30e92dbd0696f2950131bcb7a5fb87dc7892,PodSandboxId:cdcdd7a12338acb30cf842634651bd5dba3234021e172f56712e5f768ab96b91,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628633947638196372,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a21dae5f-beaf-40c2-a62b-1deec782e8ac,},Annotations:map[string]string{io.kubernetes.container.hash: 25e23c58,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc4a6a3d9877b8b4e8324753be3e37d6198a43b5da9190d1d153d96d71153724,PodSandboxId:e5298d8aa21e3c509ade50292d91f731b0b87ccc6cfbd38d56a8f76eb57fc835,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628633928832652493,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-fqn7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7785682-711b-4fb6-8eac-10af32eb4197,},Annotations:map[string]string{io.kubernetes.container.hash: e402732f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da8748eec23730ca465804ba6be770747a0abe547a3d31565a6ba147713d0166,PodSandboxId:9020a823e792ed061153ee5aa33053dff1abb1d739dd16036e59f46494b3c1df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628633928060218350,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d4lb4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c768b33-9c26-4a9c-b300-dfba02
5b2acb,},Annotations:map[string]string{io.kubernetes.container.hash: 6f55461a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a35d5039eb3cdc9c794d9c6fdc2bbab2fd96a0114cbe2b8fb0e1607c9459dec7,PodSandboxId:d8a325deebbef6eaa230aef11ad9ab391ba2ba23a10cefd3b85f23b34ce1aaf9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628633905974204532,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-20210810221736-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0903f801e4a2f0a7fc8da83c33b35f6,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2063320d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc56e9e60cb3f185140a0c7db4fd6a69f7dc838ba9bbfcdd62163852fd28154e,PodSandboxId:39ae770b43e4fd258203caf2557ff10790f6bdd7348524ba3aa8f955248039ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628633905619734766,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-20210810221736-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0f0bddad13be9583fc305fb982a5bb7,},Annotat
ions:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5195118e3c0b1329e0b492eafe351023845e3a799604e1627bab1b359ef1863,PodSandboxId:8655f2907f64ca222e610464990875ec32ea1eee9eaf915b9209e8387c27af69,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628633905597544891,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-20210810221736-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bca43c269811ea9e4d198ffd73e4326,},Annotations:map
[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6da4cdeb9aaec2c6713d7e3a6af2b3ed2185dae82920ad94b6a45a6de953a51,PodSandboxId:2a29b22731dee06d8e37f4f4b3ac236c91f2e5a2269e7115025358f7a16bb60a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628633905284853880,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-20210810221736-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a80cb0ae9d45046376f291a8eb22ee79,},Annotations:map[string
]string{io.kubernetes.container.hash: cda108ff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=393fb0ec-a955-45e0-9bbf-924d5e8b1979 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                                           CREATED             STATE               NAME                      ATTEMPT             POD ID
	7b061b156acfd       9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5                                                                                3 minutes ago       Running             etcd-restore-operator     0                   3f895a24a6665
	899fd22ea7ede       9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5                                                                                3 minutes ago       Running             etcd-backup-operator      0                   3f895a24a6665
	f888421df568f       quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b                                            3 minutes ago       Running             etcd-operator             0                   3f895a24a6665
	f306c76739a59       europe-west1-docker.pkg.dev/k8s-minikube/test-artifacts-eu/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8   3 minutes ago       Running             private-image-eu          0                   61b8c7125edd2
	f98d6cfb96e0c       docker.io/library/nginx@sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce                                                 3 minutes ago       Running             nginx                     0                   e00c07c230e01
	7f5c7c2dfb458       us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8                3 minutes ago       Running             private-image             0                   35c9c115dc99b
	36a9f0ee1e034       docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                               4 minutes ago       Running             busybox                   0                   86341f48b30f7
	523764298f1c6       quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607                                          5 minutes ago       Running             packageserver             0                   3e9dd75cc73ba
	71e14b7d4cabb       quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607                                          5 minutes ago       Running             packageserver             0                   000f05041462e
	ef8ad851b5c78       quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0                 5 minutes ago       Running             registry-server           0                   ae8f145fb7f1e
	a4005ce090065       quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607                                          6 minutes ago       Running             olm-operator              0                   faa00a4c81c64
	32466da554bde       quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607                                          6 minutes ago       Running             catalog-operator          0                   fc8c458708b44
	c70c907c6b18c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                                6 minutes ago       Running             storage-provisioner       0                   cdcdd7a12338a
	fc4a6a3d9877b       296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899                                                                                7 minutes ago       Running             coredns                   0                   e5298d8aa21e3
	da8748eec2373       adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92                                                                                7 minutes ago       Running             kube-proxy                0                   9020a823e792e
	a35d5039eb3cd       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934                                                                                7 minutes ago       Running             etcd                      0                   d8a325deebbef
	cc56e9e60cb3f       bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9                                                                                7 minutes ago       Running             kube-controller-manager   0                   39ae770b43e4f
	b5195118e3c0b       6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a                                                                                7 minutes ago       Running             kube-scheduler            0                   8655f2907f64c
	b6da4cdeb9aae       3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80                                                                                7 minutes ago       Running             kube-apiserver            0                   2a29b22731dee
	
	* 
	* ==> coredns [fc4a6a3d9877b8b4e8324753be3e37d6198a43b5da9190d1d153d96d71153724] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = 7ae91e86dd75dee9ae501cb58003198b
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               addons-20210810221736-30291
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-20210810221736-30291
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=877a5691753f15214a0c269ac69dcdc5a4d99fcd
	                    minikube.k8s.io/name=addons-20210810221736-30291
	                    minikube.k8s.io/updated_at=2021_08_10T22_18_34_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-20210810221736-30291
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Aug 2021 22:18:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-20210810221736-30291
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Aug 2021 22:25:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Aug 2021 22:22:40 +0000   Tue, 10 Aug 2021 22:18:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Aug 2021 22:22:40 +0000   Tue, 10 Aug 2021 22:18:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Aug 2021 22:22:40 +0000   Tue, 10 Aug 2021 22:18:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Aug 2021 22:22:40 +0000   Tue, 10 Aug 2021 22:18:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.30
	  Hostname:    addons-20210810221736-30291
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3935016Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3935016Ki
	  pods:               110
	System Info:
	  Machine ID:                 866096ce130c475b865d0bb399cb320f
	  System UUID:                866096ce-130c-475b-865d-0bb399cb320f
	  Boot ID:                    fee758dc-ec75-48c5-8448-adac4873d248
	  Kernel Version:             4.19.182
	  OS Image:                   Buildroot 2020.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.20.2
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (17 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  default                     private-image-7ff9c8c74f-9v7sz                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  default                     private-image-eu-5956d58f9f-g6tz4                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 coredns-558bd4d5db-fqn7v                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m5s
	  kube-system                 etcd-addons-20210810221736-30291                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m12s
	  kube-system                 kube-apiserver-addons-20210810221736-30291             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m19s
	  kube-system                 kube-controller-manager-addons-20210810221736-30291    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m12s
	  kube-system                 kube-proxy-d4lb4                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m5s
	  kube-system                 kube-scheduler-addons-20210810221736-30291             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m12s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m
	  my-etcd                     etcd-operator-85cd4f54cd-2jqws                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  olm                         catalog-operator-75d496484d-96kck                      10m (0%)      0 (0%)      80Mi (2%)        0 (0%)         6m54s
	  olm                         olm-operator-859c88c96-94cpj                           10m (0%)      0 (0%)      160Mi (4%)       0 (0%)         6m55s
	  olm                         operatorhubio-catalog-dxpzx                            10m (0%)      0 (0%)      50Mi (1%)        0 (0%)         6m34s
	  olm                         packageserver-677ff7d94-6q8mg                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m32s
	  olm                         packageserver-677ff7d94-xkd8s                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                780m (39%)   0 (0%)
	  memory             460Mi (11%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m29s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientPID     7m29s (x3 over 7m29s)  kubelet     Node addons-20210810221736-30291 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m29s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m28s (x4 over 7m29s)  kubelet     Node addons-20210810221736-30291 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m28s (x4 over 7m29s)  kubelet     Node addons-20210810221736-30291 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 7m12s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m12s                  kubelet     Node addons-20210810221736-30291 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m12s                  kubelet     Node addons-20210810221736-30291 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m12s                  kubelet     Node addons-20210810221736-30291 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m12s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m5s                   kubelet     Node addons-20210810221736-30291 status is now: NodeReady
	  Normal  Starting                 7m3s                   kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +8.701665] kauditd_printk_skb: 2 callbacks suppressed
	[ +10.505127] kauditd_printk_skb: 44 callbacks suppressed
	[  +5.633970] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.118074] kauditd_printk_skb: 2 callbacks suppressed
	[ +13.735260] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.226213] kauditd_printk_skb: 20 callbacks suppressed
	[  +0.213376] NFSD: Unable to end grace period: -110
	[  +5.789048] kauditd_printk_skb: 20 callbacks suppressed
	[Aug10 22:20] kauditd_printk_skb: 20 callbacks suppressed
	[ +15.152931] kauditd_printk_skb: 2 callbacks suppressed
	[Aug10 22:21] kauditd_printk_skb: 8 callbacks suppressed
	[  +7.109085] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.086856] kauditd_printk_skb: 53 callbacks suppressed
	[  +5.723004] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.075831] kauditd_printk_skb: 5 callbacks suppressed
	[ +15.082300] kauditd_printk_skb: 95 callbacks suppressed
	[  +8.993051] kauditd_printk_skb: 101 callbacks suppressed
	[Aug10 22:22] kauditd_printk_skb: 41 callbacks suppressed
	[  +8.691353] kauditd_printk_skb: 11 callbacks suppressed
	[  +8.083372] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.011647] kauditd_printk_skb: 32 callbacks suppressed
	[  +9.418250] kauditd_printk_skb: 26 callbacks suppressed
	[  +7.178114] kauditd_printk_skb: 122 callbacks suppressed
	[Aug10 22:25] kauditd_printk_skb: 38 callbacks suppressed
	[ +11.098895] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [7b061b156acfdc296f16162882a4dd3fe7679683c7c4efe7d9534dc097eba7a3] <==
	* time="2021-08-10T22:22:24Z" level=info msg="Go Version: go1.11.5"
	time="2021-08-10T22:22:24Z" level=info msg="Go OS/Arch: linux/amd64"
	time="2021-08-10T22:22:24Z" level=info msg="etcd-restore-operator Version: 0.9.4"
	time="2021-08-10T22:22:24Z" level=info msg="Git SHA: c8a1c64"
	E0810 22:22:24.964951       1 event.go:259] Could not construct reference to: '&v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"etcd-restore-operator", GenerateName:"", Namespace:"my-etcd", SelfLink:"", UID:"52e744b3-b855-4b52-a55c-638c234bfd61", ResourceVersion:"1984", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764230944, loc:(*time.Location)(0x24e11a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"etcd-operator-alm-owned"}, Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"etcd-operator-85cd4f54cd-2jqws\",\"leaseDurationSeconds\":15,\"acquireTime\":\"2021-08-10T22:22:24Z\",\"renewTime\":\"2021-08-10T22:22:24Z\",\"leaderTransitions\":1}", "endpoints.kubernetes.io/last-change-trigger-time":"2021-08-10T22:22:24Z"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Subsets:[]v1.EndpointSubset(nil)}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'LeaderElection' 'etcd-operator-85cd4f54cd-2jqws became leader'
	time="2021-08-10T22:22:24Z" level=info msg="starting restore controller" pkg=controller
	time="2021-08-10T22:22:24Z" level=info msg="listening on 0.0.0.0:19999"
	
	* 
	* ==> etcd [899fd22ea7edeb5db64a4c147a1c62a417b78fd6a18e926a30ab4ab51e4f44d9] <==
	* time="2021-08-10T22:22:24Z" level=info msg="Go Version: go1.11.5"
	time="2021-08-10T22:22:24Z" level=info msg="Go OS/Arch: linux/amd64"
	time="2021-08-10T22:22:24Z" level=info msg="etcd-backup-operator Version: 0.9.4"
	time="2021-08-10T22:22:24Z" level=info msg="Git SHA: c8a1c64"
	E0810 22:22:24.501063       1 event.go:259] Could not construct reference to: '&v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"etcd-backup-operator", GenerateName:"", Namespace:"my-etcd", SelfLink:"", UID:"16756f31-53d6-4862-8eb7-7022460091e6", ResourceVersion:"1974", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764230944, loc:(*time.Location)(0x25824c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"etcd-operator-85cd4f54cd-2jqws\",\"leaseDurationSeconds\":15,\"acquireTime\":\"2021-08-10T22:22:24Z\",\"renewTime\":\"2021-08-10T22:22:24Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Subsets:[]v1.EndpointSubset(nil)}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'LeaderElection' 'etcd-operator-85cd4f54cd-2jqws became leader'
	time="2021-08-10T22:22:24Z" level=info msg="starting backup controller" pkg=controller
	
	* 
	* ==> etcd [a35d5039eb3cdc9c794d9c6fdc2bbab2fd96a0114cbe2b8fb0e1607c9459dec7] <==
	* 2021-08-10 22:22:21.578347 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/my-etcd/etcd-operator\" " with result "range_response_count:1 size:918" took too long (368.271781ms) to execute
	2021-08-10 22:22:21.578718 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/operatorgroups/my-etcd/\" range_end:\"/registry/operators.coreos.com/operatorgroups/my-etcd0\" " with result "range_response_count:1 size:1131" took too long (360.375653ms) to execute
	2021-08-10 22:22:21.579120 W | etcdserver: read-only range request "key:\"/registry/namespaces/gcp-auth\" " with result "range_response_count:1 size:731" took too long (119.764147ms) to execute
	2021-08-10 22:22:21.579544 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/clusterserviceversions/my-etcd/etcdoperator.v0.9.4\" " with result "range_response_count:1 size:22154" took too long (196.56399ms) to execute
	2021-08-10 22:22:30.793863 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:22:40.774011 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:22:50.774033 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:23:00.774251 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:23:10.773925 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:23:20.773887 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:23:30.773918 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:23:40.774252 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:23:50.773841 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:24:00.778757 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:24:10.774408 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:24:20.774066 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:24:30.774003 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:24:40.773837 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:24:50.773888 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:25:00.772839 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:25:10.773324 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:25:20.774119 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:25:30.774860 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:25:40.773799 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:25:50.774831 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> etcd [f888421df568f4e525ec67a880bdfb95c0d8b1dfbf7f8e18475e6c31aae3d7b6] <==
	* time="2021-08-10T22:22:24Z" level=info msg="etcd-operator Version: 0.9.4"
	time="2021-08-10T22:22:24Z" level=info msg="Git SHA: c8a1c64"
	time="2021-08-10T22:22:24Z" level=info msg="Go Version: go1.11.5"
	time="2021-08-10T22:22:24Z" level=info msg="Go OS/Arch: linux/amd64"
	E0810 22:22:24.187480       1 event.go:259] Could not construct reference to: '&v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"etcd-operator", GenerateName:"", Namespace:"my-etcd", SelfLink:"", UID:"6ff3510c-6255-4504-bafb-7057c73123c4", ResourceVersion:"1971", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764230944, loc:(*time.Location)(0x20d4640)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"etcd-operator-85cd4f54cd-2jqws\",\"leaseDurationSeconds\":15,\"acquireTime\":\"2021-08-10T22:22:24Z\",\"renewTime\":\"2021-08-10T22:22:24Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Subsets:[]v1.EndpointSubset(nil)}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'LeaderElection' 'etcd-operator-85cd4f54cd-2jqws became leader'
	
	* 
	* ==> kernel <==
	*  22:25:51 up 8 min,  0 users,  load average: 0.88, 2.48, 1.58
	Linux addons-20210810221736-30291 4.19.182 #1 SMP Fri Aug 6 09:11:32 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [b6da4cdeb9aaec2c6713d7e3a6af2b3ed2185dae82920ad94b6a45a6de953a51] <==
	* I0810 22:22:18.167826       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0810 22:22:21.444541       1 client.go:360] parsed scheme: "passthrough"
	I0810 22:22:21.444732       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0810 22:22:21.444823       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0810 22:22:38.809530       1 cacher.go:148] Terminating all watchers from cacher *unstructured.Unstructured
	W0810 22:22:38.951813       1 cacher.go:148] Terminating all watchers from cacher *unstructured.Unstructured
	W0810 22:22:38.992839       1 cacher.go:148] Terminating all watchers from cacher *unstructured.Unstructured
	I0810 22:22:56.645666       1 client.go:360] parsed scheme: "passthrough"
	I0810 22:22:56.645794       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0810 22:22:56.645806       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0810 22:23:36.749254       1 client.go:360] parsed scheme: "passthrough"
	I0810 22:23:36.749685       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0810 22:23:36.749742       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0810 22:24:10.404416       1 client.go:360] parsed scheme: "passthrough"
	I0810 22:24:10.404699       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0810 22:24:10.404746       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0810 22:24:48.427958       1 client.go:360] parsed scheme: "passthrough"
	I0810 22:24:48.428077       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0810 22:24:48.428087       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	E0810 22:25:22.035330       1 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E0810 22:25:25.496290       1 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	I0810 22:25:31.872650       1 client.go:360] parsed scheme: "passthrough"
	I0810 22:25:31.872890       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0810 22:25:31.872940       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	E0810 22:25:33.003186       1 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	* 
	* ==> kube-controller-manager [cc56e9e60cb3f185140a0c7db4fd6a69f7dc838ba9bbfcdd62163852fd28154e] <==
	* E0810 22:22:48.223217       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0810 22:22:48.270142       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	I0810 22:22:48.270419       1 shared_informer.go:247] Caches are synced for resource quota 
	I0810 22:22:48.546474       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0810 22:22:48.546520       1 shared_informer.go:247] Caches are synced for garbage collector 
	E0810 22:22:49.146424       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0810 22:22:49.406670       1 namespace_controller.go:185] Namespace has been deleted gcp-auth
	E0810 22:22:58.063637       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0810 22:22:59.101192       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0810 22:22:59.115858       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0810 22:23:12.285400       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0810 22:23:16.006280       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0810 22:23:19.783333       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0810 22:23:51.440114       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0810 22:23:51.697527       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0810 22:23:57.531959       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0810 22:24:29.788148       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0810 22:24:31.322853       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0810 22:24:44.930402       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0810 22:25:05.131803       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0810 22:25:05.676884       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0810 22:25:18.390901       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0810 22:25:26.855277       1 tokens_controller.go:262] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-b2659" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	E0810 22:25:46.860081       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0810 22:25:46.940247       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [da8748eec23730ca465804ba6be770747a0abe547a3d31565a6ba147713d0166] <==
	* I0810 22:18:48.579287       1 node.go:172] Successfully retrieved node IP: 192.168.50.30
	I0810 22:18:48.579406       1 server_others.go:140] Detected node IP 192.168.50.30
	W0810 22:18:48.579429       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	W0810 22:18:48.673053       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0810 22:18:48.673075       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0810 22:18:48.673089       1 server_others.go:212] Using iptables Proxier.
	I0810 22:18:48.673785       1 server.go:643] Version: v1.21.3
	I0810 22:18:48.675118       1 config.go:315] Starting service config controller
	I0810 22:18:48.675131       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0810 22:18:48.675147       1 config.go:224] Starting endpoint slice config controller
	I0810 22:18:48.675151       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0810 22:18:48.677984       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0810 22:18:48.679668       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0810 22:18:48.779776       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0810 22:18:48.779814       1 shared_informer.go:247] Caches are synced for service config 
	W0810 22:24:27.712705       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	
	* 
	* ==> kube-scheduler [b5195118e3c0b1329e0b492eafe351023845e3a799604e1627bab1b359ef1863] <==
	* E0810 22:18:30.626739       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0810 22:18:30.630921       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0810 22:18:30.631264       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0810 22:18:30.632499       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0810 22:18:30.632921       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0810 22:18:30.633178       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0810 22:18:30.633296       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0810 22:18:30.633985       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0810 22:18:30.633988       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0810 22:18:30.634323       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0810 22:18:30.634335       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0810 22:18:30.634536       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0810 22:18:30.634937       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0810 22:18:31.468382       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0810 22:18:31.493997       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0810 22:18:31.511534       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0810 22:18:31.599870       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0810 22:18:31.725703       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0810 22:18:31.758068       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0810 22:18:31.773637       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0810 22:18:31.801024       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0810 22:18:31.961004       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0810 22:18:31.963022       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0810 22:18:32.087526       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0810 22:18:34.023639       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2021-08-10 22:17:47 UTC, end at Tue 2021-08-10 22:25:51 UTC. --
	Aug 10 22:23:30 addons-20210810221736-30291 kubelet[2803]: I0810 22:23:30.660781    2803 kubelet_pods.go:895] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/nginx" secret="" err="secret \"gcp-auth\" not found"
	Aug 10 22:23:58 addons-20210810221736-30291 kubelet[2803]: I0810 22:23:58.660923    2803 kubelet_pods.go:895] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Aug 10 22:24:26 addons-20210810221736-30291 kubelet[2803]: I0810 22:24:26.660978    2803 kubelet_pods.go:895] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/private-image-7ff9c8c74f-9v7sz" secret="" err="secret \"gcp-auth\" not found"
	Aug 10 22:24:32 addons-20210810221736-30291 kubelet[2803]: I0810 22:24:32.660491    2803 kubelet_pods.go:895] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/private-image-eu-5956d58f9f-g6tz4" secret="" err="secret \"gcp-auth\" not found"
	Aug 10 22:24:44 addons-20210810221736-30291 kubelet[2803]: I0810 22:24:44.660929    2803 kubelet_pods.go:895] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/nginx" secret="" err="secret \"gcp-auth\" not found"
	Aug 10 22:25:21 addons-20210810221736-30291 kubelet[2803]: E0810 22:25:21.917794    2803 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-59b45fb494-7qnv2.169a124972600d6e", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-59b45fb494-7qnv2", UID:"35d31ad6-a5a5-44fe-803b-2baa35eb6f04", APIVersion:"v1", ResourceVersion:"589", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"addons-20210810221736-30291"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc03cdd147629236e, ext:407902935601, loc:(*time.Location)(0x74c3600)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc03cdd147629236e, ext:407902935601, loc:(*time.Location)(0x74c3600)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-59b45fb494-7qnv2.169a124972600d6e" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 10 22:25:22 addons-20210810221736-30291 kubelet[2803]: E0810 22:25:22.944779    2803 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-59b45fb494-7qnv2.169a1249afeb3221", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-59b45fb494-7qnv2", UID:"35d31ad6-a5a5-44fe-803b-2baa35eb6f04", APIVersion:"v1", ResourceVersion:"589", FieldPath:"spec.containers{controller}"}, Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"addons-20210810221736-30291"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc03cdd14b8197e21, ext:408935464647, loc:(*time.Location)(0x74c3600)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc03cdd14b8197e21, ext:408935464647, loc:(*time.Location)(0x74c3600)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-59b45fb494-7qnv2.169a1249afeb3221" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 10 22:25:22 addons-20210810221736-30291 kubelet[2803]: E0810 22:25:22.947971    2803 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-59b45fb494-7qnv2.169a1249afed6476", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-59b45fb494-7qnv2", UID:"35d31ad6-a5a5-44fe-803b-2baa35eb6f04", APIVersion:"v1", ResourceVersion:"589", FieldPath:"spec.containers{controller}"}, Reason:"Unhealthy", Message:"Liveness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"addons-20210810221736-30291"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc03cdd14b81bb076, ext:408935608745, loc:(*time.Location)(0x74c3600)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc03cdd14b81bb076, ext:408935608745, loc:(*time.Location)(0x74c3600)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-59b45fb494-7qnv2.169a1249afed6476" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 10 22:25:26 addons-20210810221736-30291 kubelet[2803]: I0810 22:25:26.661016    2803 kubelet_pods.go:895] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Aug 10 22:25:32 addons-20210810221736-30291 kubelet[2803]: E0810 22:25:32.947416    2803 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-59b45fb494-7qnv2.169a1249afeb3221", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-59b45fb494-7qnv2", UID:"35d31ad6-a5a5-44fe-803b-2baa35eb6f04", APIVersion:"v1", ResourceVersion:"589", FieldPath:"spec.containers{controller}"}, Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"addons-20210810221736-30291"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc03cdd14b8197e21, ext:408935464647, loc:(*time.Location)(0x74c3600)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc03cdd17380ce639, ext:418934639420, loc:(*time.Location)(0x74c3600)}}, Count:2, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-59b45fb494-7qnv2.169a1249afeb3221" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 10 22:25:32 addons-20210810221736-30291 kubelet[2803]: E0810 22:25:32.952739    2803 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-59b45fb494-7qnv2.169a1249afed6476", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-59b45fb494-7qnv2", UID:"35d31ad6-a5a5-44fe-803b-2baa35eb6f04", APIVersion:"v1", ResourceVersion:"589", FieldPath:"spec.containers{controller}"}, Reason:"Unhealthy", Message:"Liveness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"addons-20210810221736-30291"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc03cdd14b81bb076, ext:408935608745, loc:(*time.Location)(0x74c3600)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc03cdd173817aac8, ext:418935345053, loc:(*time.Location)(0x74c3600)}}, Count:2, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-59b45fb494-7qnv2.169a1249afed6476" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 10 22:25:33 addons-20210810221736-30291 kubelet[2803]: I0810 22:25:33.634416    2803 scope.go:111] "RemoveContainer" containerID="d7df4d7118133b9ed9f772d65f975a632f60980b0782615a18c2f716abba2b19"
	Aug 10 22:25:33 addons-20210810221736-30291 kubelet[2803]: I0810 22:25:33.683968    2803 scope.go:111] "RemoveContainer" containerID="d7df4d7118133b9ed9f772d65f975a632f60980b0782615a18c2f716abba2b19"
	Aug 10 22:25:33 addons-20210810221736-30291 kubelet[2803]: E0810 22:25:33.685307    2803 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7df4d7118133b9ed9f772d65f975a632f60980b0782615a18c2f716abba2b19\": container with ID starting with d7df4d7118133b9ed9f772d65f975a632f60980b0782615a18c2f716abba2b19 not found: ID does not exist" containerID="d7df4d7118133b9ed9f772d65f975a632f60980b0782615a18c2f716abba2b19"
	Aug 10 22:25:33 addons-20210810221736-30291 kubelet[2803]: I0810 22:25:33.685359    2803 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:d7df4d7118133b9ed9f772d65f975a632f60980b0782615a18c2f716abba2b19} err="failed to get container status \"d7df4d7118133b9ed9f772d65f975a632f60980b0782615a18c2f716abba2b19\": rpc error: code = NotFound desc = could not find container \"d7df4d7118133b9ed9f772d65f975a632f60980b0782615a18c2f716abba2b19\": container with ID starting with d7df4d7118133b9ed9f772d65f975a632f60980b0782615a18c2f716abba2b19 not found: ID does not exist"
	Aug 10 22:25:34 addons-20210810221736-30291 kubelet[2803]: I0810 22:25:34.775123    2803 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkpw6\" (UniqueName: \"kubernetes.io/projected/35d31ad6-a5a5-44fe-803b-2baa35eb6f04-kube-api-access-bkpw6\") pod \"35d31ad6-a5a5-44fe-803b-2baa35eb6f04\" (UID: \"35d31ad6-a5a5-44fe-803b-2baa35eb6f04\") "
	Aug 10 22:25:34 addons-20210810221736-30291 kubelet[2803]: I0810 22:25:34.775246    2803 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/35d31ad6-a5a5-44fe-803b-2baa35eb6f04-webhook-cert\") pod \"35d31ad6-a5a5-44fe-803b-2baa35eb6f04\" (UID: \"35d31ad6-a5a5-44fe-803b-2baa35eb6f04\") "
	Aug 10 22:25:34 addons-20210810221736-30291 kubelet[2803]: I0810 22:25:34.789451    2803 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35d31ad6-a5a5-44fe-803b-2baa35eb6f04-kube-api-access-bkpw6" (OuterVolumeSpecName: "kube-api-access-bkpw6") pod "35d31ad6-a5a5-44fe-803b-2baa35eb6f04" (UID: "35d31ad6-a5a5-44fe-803b-2baa35eb6f04"). InnerVolumeSpecName "kube-api-access-bkpw6". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 10 22:25:34 addons-20210810221736-30291 kubelet[2803]: I0810 22:25:34.789913    2803 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35d31ad6-a5a5-44fe-803b-2baa35eb6f04-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "35d31ad6-a5a5-44fe-803b-2baa35eb6f04" (UID: "35d31ad6-a5a5-44fe-803b-2baa35eb6f04"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 10 22:25:34 addons-20210810221736-30291 kubelet[2803]: I0810 22:25:34.875913    2803 reconciler.go:319] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/35d31ad6-a5a5-44fe-803b-2baa35eb6f04-webhook-cert\") on node \"addons-20210810221736-30291\" DevicePath \"\""
	Aug 10 22:25:34 addons-20210810221736-30291 kubelet[2803]: I0810 22:25:34.875955    2803 reconciler.go:319] "Volume detached for volume \"kube-api-access-bkpw6\" (UniqueName: \"kubernetes.io/projected/35d31ad6-a5a5-44fe-803b-2baa35eb6f04-kube-api-access-bkpw6\") on node \"addons-20210810221736-30291\" DevicePath \"\""
	Aug 10 22:25:40 addons-20210810221736-30291 kubelet[2803]: I0810 22:25:40.661008    2803 kubelet_pods.go:895] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/private-image-7ff9c8c74f-9v7sz" secret="" err="secret \"gcp-auth\" not found"
	Aug 10 22:25:42 addons-20210810221736-30291 kubelet[2803]: I0810 22:25:42.514676    2803 scope.go:111] "RemoveContainer" containerID="38277e038b6dc784d20cdd119c5a0d2b1139b732f78f63a4acd23cf4105f3bd8"
	Aug 10 22:25:42 addons-20210810221736-30291 kubelet[2803]: I0810 22:25:42.572449    2803 scope.go:111] "RemoveContainer" containerID="60b66dd9ebbafbf813338ea1af39df8c9561a07908f5a5d85096181c50a058c3"
	Aug 10 22:25:45 addons-20210810221736-30291 kubelet[2803]: I0810 22:25:45.661334    2803 kubelet_pods.go:895] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/private-image-eu-5956d58f9f-g6tz4" secret="" err="secret \"gcp-auth\" not found"
	
	* 
	* ==> storage-provisioner [c70c907c6b18c522de143dfb6f9d30e92dbd0696f2950131bcb7a5fb87dc7892] <==
	* I0810 22:19:07.779397       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0810 22:19:07.832851       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0810 22:19:07.833127       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0810 22:19:07.863070       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0810 22:19:07.869171       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-20210810221736-30291_78f77fbe-a6a3-47b6-a70c-24452d6ef9fc!
	I0810 22:19:07.875043       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a0ae41d2-c986-46b4-8300-6bda2761ba63", APIVersion:"v1", ResourceVersion:"892", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-20210810221736-30291_78f77fbe-a6a3-47b6-a70c-24452d6ef9fc became leader
	I0810 22:19:07.970782       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-20210810221736-30291_78f77fbe-a6a3-47b6-a70c-24452d6ef9fc!
	

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-20210810221736-30291 -n addons-20210810221736-30291
helpers_test.go:262: (dbg) Run:  kubectl --context addons-20210810221736-30291 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:268: non-running pods: 
helpers_test.go:270: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:273: (dbg) Run:  kubectl --context addons-20210810221736-30291 describe pod 
helpers_test.go:273: (dbg) Non-zero exit: kubectl --context addons-20210810221736-30291 describe pod : exit status 1 (53.332992ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:275: kubectl --context addons-20210810221736-30291 describe pod : exit status 1
--- FAIL: TestAddons/parallel/Ingress (246.64s)

TestMultiNode/serial/DeployApp2Nodes (191.06s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:462: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210810223223-30291 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210810223223-30291 -- rollout status deployment/busybox
E0810 22:34:35.791831   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222707-30291/client.crt: no such file or directory
E0810 22:34:35.797140   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222707-30291/client.crt: no such file or directory
E0810 22:34:35.807409   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222707-30291/client.crt: no such file or directory
E0810 22:34:35.827669   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222707-30291/client.crt: no such file or directory
E0810 22:34:35.867928   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222707-30291/client.crt: no such file or directory
E0810 22:34:35.948270   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222707-30291/client.crt: no such file or directory
E0810 22:34:36.108941   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222707-30291/client.crt: no such file or directory
E0810 22:34:36.429322   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222707-30291/client.crt: no such file or directory
E0810 22:34:37.070256   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222707-30291/client.crt: no such file or directory
multinode_test.go:467: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-20210810223223-30291 -- rollout status deployment/busybox: (6.251069692s)
multinode_test.go:473: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210810223223-30291 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:485: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210810223223-30291 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210810223223-30291 -- exec busybox-84b6686758-5h7gq -- nslookup kubernetes.io
E0810 22:34:38.350739   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222707-30291/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210810223223-30291 -- exec busybox-84b6686758-nfzzk -- nslookup kubernetes.io
E0810 22:34:40.911018   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222707-30291/client.crt: no such file or directory
E0810 22:34:46.031285   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222707-30291/client.crt: no such file or directory
E0810 22:34:56.271575   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222707-30291/client.crt: no such file or directory
E0810 22:35:16.752078   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222707-30291/client.crt: no such file or directory
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-20210810223223-30291 -- exec busybox-84b6686758-nfzzk -- nslookup kubernetes.io: exit status 1 (1m0.301647527s)

-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.io'
	command terminated with exit code 1

** /stderr **
multinode_test.go:495: Pod busybox-84b6686758-nfzzk could not resolve 'kubernetes.io': exit status 1
multinode_test.go:503: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210810223223-30291 -- exec busybox-84b6686758-5h7gq -- nslookup kubernetes.default
multinode_test.go:503: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210810223223-30291 -- exec busybox-84b6686758-nfzzk -- nslookup kubernetes.default
E0810 22:35:57.713386   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222707-30291/client.crt: no such file or directory
E0810 22:36:13.065269   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/client.crt: no such file or directory
multinode_test.go:503: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-20210810223223-30291 -- exec busybox-84b6686758-nfzzk -- nslookup kubernetes.default: exit status 1 (1m0.309515595s)

-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.default'
	command terminated with exit code 1

** /stderr **
multinode_test.go:505: Pod busybox-84b6686758-nfzzk could not resolve 'kubernetes.default': exit status 1
multinode_test.go:511: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210810223223-30291 -- exec busybox-84b6686758-5h7gq -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:511: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210810223223-30291 -- exec busybox-84b6686758-nfzzk -- nslookup kubernetes.default.svc.cluster.local
E0810 22:37:19.633661   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222707-30291/client.crt: no such file or directory
multinode_test.go:511: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-20210810223223-30291 -- exec busybox-84b6686758-nfzzk -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (1m0.284636194s)

-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
	command terminated with exit code 1

** /stderr **
multinode_test.go:513: Pod busybox-84b6686758-nfzzk could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-20210810223223-30291 -n multinode-20210810223223-30291
helpers_test.go:245: <<< TestMultiNode/serial/DeployApp2Nodes FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210810223223-30291 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210810223223-30291 logs -n 25: (1.279567014s)
helpers_test.go:253: TestMultiNode/serial/DeployApp2Nodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |-----------|---------------------------------------------------|----------------------------------------|----------|---------|-------------------------------|-------------------------------|
	|  Command  |                       Args                        |                Profile                 |   User   | Version |          Start Time           |           End Time            |
	|-----------|---------------------------------------------------|----------------------------------------|----------|---------|-------------------------------|-------------------------------|
	| -p        | functional-20210810222707-30291                   | functional-20210810222707-30291        | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:30:15 UTC | Tue, 10 Aug 2021 22:30:16 UTC |
	|           | version -o=json --components                      |                                        |          |         |                               |                               |
	| -p        | functional-20210810222707-30291                   | functional-20210810222707-30291        | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:30:16 UTC | Tue, 10 Aug 2021 22:30:16 UTC |
	|           | update-context                                    |                                        |          |         |                               |                               |
	|           | --alsologtostderr -v=2                            |                                        |          |         |                               |                               |
	| -p        | functional-20210810222707-30291                   | functional-20210810222707-30291        | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:30:17 UTC | Tue, 10 Aug 2021 22:30:17 UTC |
	|           | update-context                                    |                                        |          |         |                               |                               |
	|           | --alsologtostderr -v=2                            |                                        |          |         |                               |                               |
	| -p        | functional-20210810222707-30291                   | functional-20210810222707-30291        | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:30:17 UTC | Tue, 10 Aug 2021 22:30:17 UTC |
	|           | update-context                                    |                                        |          |         |                               |                               |
	|           | --alsologtostderr -v=2                            |                                        |          |         |                               |                               |
	| dashboard | --url --port 36195 -p                             | functional-20210810222707-30291        | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:30:11 UTC | Tue, 10 Aug 2021 22:30:17 UTC |
	|           | functional-20210810222707-30291                   |                                        |          |         |                               |                               |
	|           | --alsologtostderr -v=1                            |                                        |          |         |                               |                               |
	| -p        | functional-20210810222707-30291                   | functional-20210810222707-30291        | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:30:19 UTC | Tue, 10 Aug 2021 22:30:19 UTC |
	|           | ssh stat                                          |                                        |          |         |                               |                               |
	|           | /mount-9p/created-by-test                         |                                        |          |         |                               |                               |
	| -p        | functional-20210810222707-30291                   | functional-20210810222707-30291        | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:30:19 UTC | Tue, 10 Aug 2021 22:30:20 UTC |
	|           | ssh stat                                          |                                        |          |         |                               |                               |
	|           | /mount-9p/created-by-pod                          |                                        |          |         |                               |                               |
	| -p        | functional-20210810222707-30291                   | functional-20210810222707-30291        | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:30:20 UTC | Tue, 10 Aug 2021 22:30:20 UTC |
	|           | ssh sudo umount -f /mount-9p                      |                                        |          |         |                               |                               |
	| -p        | functional-20210810222707-30291                   | functional-20210810222707-30291        | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:30:21 UTC | Tue, 10 Aug 2021 22:30:21 UTC |
	|           | ssh findmnt -T /mount-9p | grep                   |                                        |          |         |                               |                               |
	|           | 9p                                                |                                        |          |         |                               |                               |
	| -p        | functional-20210810222707-30291                   | functional-20210810222707-30291        | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:30:21 UTC | Tue, 10 Aug 2021 22:30:21 UTC |
	|           | ssh -- ls -la /mount-9p                           |                                        |          |         |                               |                               |
	| delete    | -p                                                | functional-20210810222707-30291        | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:30:46 UTC | Tue, 10 Aug 2021 22:30:47 UTC |
	|           | functional-20210810222707-30291                   |                                        |          |         |                               |                               |
	| start     | -p                                                | json-output-20210810223047-30291       | testUser | v1.22.0 | Tue, 10 Aug 2021 22:30:47 UTC | Tue, 10 Aug 2021 22:32:12 UTC |
	|           | json-output-20210810223047-30291                  |                                        |          |         |                               |                               |
	|           | --output=json --user=testUser                     |                                        |          |         |                               |                               |
	|           | --memory=2200 --wait=true                         |                                        |          |         |                               |                               |
	|           | --driver=kvm2                                     |                                        |          |         |                               |                               |
	|           | --container-runtime=crio                          |                                        |          |         |                               |                               |
	| pause     | -p                                                | json-output-20210810223047-30291       | testUser | v1.22.0 | Tue, 10 Aug 2021 22:32:12 UTC | Tue, 10 Aug 2021 22:32:13 UTC |
	|           | json-output-20210810223047-30291                  |                                        |          |         |                               |                               |
	|           | --output=json --user=testUser                     |                                        |          |         |                               |                               |
	| unpause   | -p                                                | json-output-20210810223047-30291       | testUser | v1.22.0 | Tue, 10 Aug 2021 22:32:13 UTC | Tue, 10 Aug 2021 22:32:14 UTC |
	|           | json-output-20210810223047-30291                  |                                        |          |         |                               |                               |
	|           | --output=json --user=testUser                     |                                        |          |         |                               |                               |
	| stop      | -p                                                | json-output-20210810223047-30291       | testUser | v1.22.0 | Tue, 10 Aug 2021 22:32:14 UTC | Tue, 10 Aug 2021 22:32:22 UTC |
	|           | json-output-20210810223047-30291                  |                                        |          |         |                               |                               |
	|           | --output=json --user=testUser                     |                                        |          |         |                               |                               |
	| delete    | -p                                                | json-output-20210810223047-30291       | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:32:22 UTC | Tue, 10 Aug 2021 22:32:23 UTC |
	|           | json-output-20210810223047-30291                  |                                        |          |         |                               |                               |
	| delete    | -p                                                | json-output-error-20210810223223-30291 | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:32:23 UTC | Tue, 10 Aug 2021 22:32:23 UTC |
	|           | json-output-error-20210810223223-30291            |                                        |          |         |                               |                               |
	| start     | -p                                                | multinode-20210810223223-30291         | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:32:23 UTC | Tue, 10 Aug 2021 22:34:30 UTC |
	|           | multinode-20210810223223-30291                    |                                        |          |         |                               |                               |
	|           | --wait=true --memory=2200                         |                                        |          |         |                               |                               |
	|           | --nodes=2 -v=8                                    |                                        |          |         |                               |                               |
	|           | --alsologtostderr                                 |                                        |          |         |                               |                               |
	|           | --driver=kvm2                                     |                                        |          |         |                               |                               |
	|           | --container-runtime=crio                          |                                        |          |         |                               |                               |
	| kubectl   | -p multinode-20210810223223-30291 -- apply -f     | multinode-20210810223223-30291         | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:34:31 UTC | Tue, 10 Aug 2021 22:34:31 UTC |
	|           | ./testdata/multinodes/multinode-pod-dns-test.yaml |                                        |          |         |                               |                               |
	| kubectl   | -p                                                | multinode-20210810223223-30291         | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:34:31 UTC | Tue, 10 Aug 2021 22:34:37 UTC |
	|           | multinode-20210810223223-30291                    |                                        |          |         |                               |                               |
	|           | -- rollout status                                 |                                        |          |         |                               |                               |
	|           | deployment/busybox                                |                                        |          |         |                               |                               |
	| kubectl   | -p multinode-20210810223223-30291                 | multinode-20210810223223-30291         | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:34:38 UTC | Tue, 10 Aug 2021 22:34:38 UTC |
	|           | -- get pods -o                                    |                                        |          |         |                               |                               |
	|           | jsonpath='{.items[*].status.podIP}'               |                                        |          |         |                               |                               |
	| kubectl   | -p multinode-20210810223223-30291                 | multinode-20210810223223-30291         | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:34:38 UTC | Tue, 10 Aug 2021 22:34:38 UTC |
	|           | -- get pods -o                                    |                                        |          |         |                               |                               |
	|           | jsonpath='{.items[*].metadata.name}'              |                                        |          |         |                               |                               |
	| kubectl   | -p                                                | multinode-20210810223223-30291         | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:34:38 UTC | Tue, 10 Aug 2021 22:34:38 UTC |
	|           | multinode-20210810223223-30291                    |                                        |          |         |                               |                               |
	|           | -- exec                                           |                                        |          |         |                               |                               |
	|           | busybox-84b6686758-5h7gq --                       |                                        |          |         |                               |                               |
	|           | nslookup kubernetes.io                            |                                        |          |         |                               |                               |
	| kubectl   | -p                                                | multinode-20210810223223-30291         | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:35:39 UTC | Tue, 10 Aug 2021 22:35:39 UTC |
	|           | multinode-20210810223223-30291                    |                                        |          |         |                               |                               |
	|           | -- exec                                           |                                        |          |         |                               |                               |
	|           | busybox-84b6686758-5h7gq --                       |                                        |          |         |                               |                               |
	|           | nslookup kubernetes.default                       |                                        |          |         |                               |                               |
	| kubectl   | -p multinode-20210810223223-30291                 | multinode-20210810223223-30291         | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:36:39 UTC | Tue, 10 Aug 2021 22:36:39 UTC |
	|           | -- exec busybox-84b6686758-5h7gq                  |                                        |          |         |                               |                               |
	|           | -- nslookup                                       |                                        |          |         |                               |                               |
	|           | kubernetes.default.svc.cluster.local              |                                        |          |         |                               |                               |
	|-----------|---------------------------------------------------|----------------------------------------|----------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/10 22:32:23
	Running on machine: debian-jenkins-agent-3
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0810 22:32:23.642564    4347 out.go:298] Setting OutFile to fd 1 ...
	I0810 22:32:23.642648    4347 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:32:23.642679    4347 out.go:311] Setting ErrFile to fd 2...
	I0810 22:32:23.642682    4347 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:32:23.642797    4347 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin
	I0810 22:32:23.643088    4347 out.go:305] Setting JSON to false
	I0810 22:32:23.678453    4347 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-3","uptime":8104,"bootTime":1628626640,"procs":153,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0810 22:32:23.678565    4347 start.go:121] virtualization: kvm guest
	I0810 22:32:23.681051    4347 out.go:177] * [multinode-20210810223223-30291] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0810 22:32:23.682514    4347 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0810 22:32:23.681216    4347 notify.go:169] Checking for updates...
	I0810 22:32:23.684022    4347 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0810 22:32:23.685360    4347 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	I0810 22:32:23.686753    4347 out.go:177]   - MINIKUBE_LOCATION=12230
	I0810 22:32:23.686947    4347 driver.go:335] Setting default libvirt URI to qemu:///system
	I0810 22:32:23.716577    4347 out.go:177] * Using the kvm2 driver based on user configuration
	I0810 22:32:23.716602    4347 start.go:278] selected driver: kvm2
	I0810 22:32:23.716608    4347 start.go:751] validating driver "kvm2" against <nil>
	I0810 22:32:23.716625    4347 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0810 22:32:23.717725    4347 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:32:23.717883    4347 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0810 22:32:23.728536    4347 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0810 22:32:23.728591    4347 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0810 22:32:23.728763    4347 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0810 22:32:23.728787    4347 cni.go:93] Creating CNI manager for ""
	I0810 22:32:23.728792    4347 cni.go:154] 0 nodes found, recommending kindnet
	I0810 22:32:23.728797    4347 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0810 22:32:23.728805    4347 start_flags.go:277] config:
	{Name:multinode-20210810223223-30291 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-20210810223223-30291 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0}
	I0810 22:32:23.728921    4347 iso.go:123] acquiring lock: {Name:mke8829815ca14456120fefc524d0a056bf82da0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:32:23.730742    4347 out.go:177] * Starting control plane node multinode-20210810223223-30291 in cluster multinode-20210810223223-30291
	I0810 22:32:23.730775    4347 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0810 22:32:23.730811    4347 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0810 22:32:23.730841    4347 cache.go:56] Caching tarball of preloaded images
	I0810 22:32:23.730956    4347 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0810 22:32:23.730986    4347 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on crio
	I0810 22:32:23.731372    4347 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/config.json ...
	I0810 22:32:23.731405    4347 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/config.json: {Name:mka5062e6b69c2d8df20f3df3953506ad4b5dcbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:32:23.731560    4347 cache.go:205] Successfully downloaded all kic artifacts
	I0810 22:32:23.731610    4347 start.go:313] acquiring machines lock for multinode-20210810223223-30291: {Name:mk9647f7c84b24381af0d3e731fd883065efc3b8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0810 22:32:23.731679    4347 start.go:317] acquired machines lock for "multinode-20210810223223-30291" in 43.504µs
	I0810 22:32:23.731711    4347 start.go:89] Provisioning new machine with config: &{Name:multinode-20210810223223-30291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12122/minikube-v1.22.0-1628238775-12122.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-20210810223223-30291 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0810 22:32:23.731793    4347 start.go:126] createHost starting for "" (driver="kvm2")
	I0810 22:32:23.733836    4347 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0810 22:32:23.733948    4347 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:32:23.733987    4347 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:32:23.743753    4347 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44777
	I0810 22:32:23.744195    4347 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:32:23.744720    4347 main.go:130] libmachine: Using API Version  1
	I0810 22:32:23.744760    4347 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:32:23.745072    4347 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:32:23.745242    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetMachineName
	I0810 22:32:23.745380    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .DriverName
	I0810 22:32:23.745533    4347 start.go:160] libmachine.API.Create for "multinode-20210810223223-30291" (driver="kvm2")
	I0810 22:32:23.745567    4347 client.go:168] LocalClient.Create starting
	I0810 22:32:23.745604    4347 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem
	I0810 22:32:23.745632    4347 main.go:130] libmachine: Decoding PEM data...
	I0810 22:32:23.745654    4347 main.go:130] libmachine: Parsing certificate...
	I0810 22:32:23.745814    4347 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem
	I0810 22:32:23.745839    4347 main.go:130] libmachine: Decoding PEM data...
	I0810 22:32:23.745863    4347 main.go:130] libmachine: Parsing certificate...
	I0810 22:32:23.745921    4347 main.go:130] libmachine: Running pre-create checks...
	I0810 22:32:23.745934    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .PreCreateCheck
	I0810 22:32:23.746288    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetConfigRaw
	I0810 22:32:23.746660    4347 main.go:130] libmachine: Creating machine...
	I0810 22:32:23.746679    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .Create
	I0810 22:32:23.746779    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Creating KVM machine...
	I0810 22:32:23.749187    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | found existing default KVM network
	I0810 22:32:23.750119    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | I0810 22:32:23.749981    4371 network.go:240] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b4:06:d9}}
	I0810 22:32:23.750960    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | I0810 22:32:23.750886    4371 network.go:288] reserving subnet 192.168.50.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.50.0:0xc0000a05e0] misses:0}
	I0810 22:32:23.750994    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | I0810 22:32:23.750929    4371 network.go:235] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0810 22:32:23.778684    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | trying to create private KVM network mk-multinode-20210810223223-30291 192.168.50.0/24...
	I0810 22:32:24.044571    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | private KVM network mk-multinode-20210810223223-30291 192.168.50.0/24 created
	I0810 22:32:24.044622    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Setting up store path in /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291 ...
	I0810 22:32:24.044644    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | I0810 22:32:24.044521    4371 common.go:108] Making disk image using store path: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	I0810 22:32:24.044669    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Building disk image from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/iso/minikube-v1.22.0-1628238775-12122.iso
	I0810 22:32:24.044705    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Downloading /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/iso/minikube-v1.22.0-1628238775-12122.iso...
	I0810 22:32:24.253986    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | I0810 22:32:24.253861    4371 common.go:115] Creating ssh key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291/id_rsa...
	I0810 22:32:24.632302    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | I0810 22:32:24.632164    4371 common.go:121] Creating raw disk image: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291/multinode-20210810223223-30291.rawdisk...
	I0810 22:32:24.632334    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | Writing magic tar header
	I0810 22:32:24.632352    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | Writing SSH key tar header
	I0810 22:32:24.632366    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | I0810 22:32:24.632275    4371 common.go:135] Fixing permissions on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291 ...
	I0810 22:32:24.632388    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291
	I0810 22:32:24.632407    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines
	I0810 22:32:24.632418    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	I0810 22:32:24.632436    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291 (perms=drwx------)
	I0810 22:32:24.632455    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0
	I0810 22:32:24.632471    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines (perms=drwxr-xr-x)
	I0810 22:32:24.632499    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube (perms=drwxr-xr-x)
	I0810 22:32:24.632520    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0 (perms=drwxr-xr-x)
	I0810 22:32:24.632536    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0810 22:32:24.632555    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | Checking permissions on dir: /home/jenkins
	I0810 22:32:24.632564    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | Checking permissions on dir: /home
	I0810 22:32:24.632574    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | Skipping /home - not owner
	I0810 22:32:24.632613    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxr-xr-x)
	I0810 22:32:24.632634    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0810 22:32:24.632643    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Creating domain...
	I0810 22:32:24.657516    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:67:0e:c9 in network default
	I0810 22:32:24.657949    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:24.657965    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Ensuring networks are active...
	I0810 22:32:24.659918    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Ensuring network default is active
	I0810 22:32:24.660178    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Ensuring network mk-multinode-20210810223223-30291 is active
	I0810 22:32:24.660673    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Getting domain xml...
	I0810 22:32:24.662584    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Creating domain...
	I0810 22:32:25.061208    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Waiting to get IP...
	I0810 22:32:25.062227    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:25.062647    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | unable to find current IP address of domain multinode-20210810223223-30291 in network mk-multinode-20210810223223-30291
	I0810 22:32:25.062677    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | I0810 22:32:25.062606    4371 retry.go:31] will retry after 263.082536ms: waiting for machine to come up
	I0810 22:32:25.327001    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:25.327503    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | unable to find current IP address of domain multinode-20210810223223-30291 in network mk-multinode-20210810223223-30291
	I0810 22:32:25.327552    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | I0810 22:32:25.327457    4371 retry.go:31] will retry after 381.329545ms: waiting for machine to come up
	I0810 22:32:25.710117    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:25.710642    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | unable to find current IP address of domain multinode-20210810223223-30291 in network mk-multinode-20210810223223-30291
	I0810 22:32:25.710666    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | I0810 22:32:25.710598    4371 retry.go:31] will retry after 422.765636ms: waiting for machine to come up
	I0810 22:32:26.135175    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:26.135649    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | unable to find current IP address of domain multinode-20210810223223-30291 in network mk-multinode-20210810223223-30291
	I0810 22:32:26.135679    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | I0810 22:32:26.135601    4371 retry.go:31] will retry after 473.074753ms: waiting for machine to come up
	I0810 22:32:26.609721    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:26.610222    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | unable to find current IP address of domain multinode-20210810223223-30291 in network mk-multinode-20210810223223-30291
	I0810 22:32:26.610270    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | I0810 22:32:26.610183    4371 retry.go:31] will retry after 587.352751ms: waiting for machine to come up
	I0810 22:32:27.198598    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:27.199034    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | unable to find current IP address of domain multinode-20210810223223-30291 in network mk-multinode-20210810223223-30291
	I0810 22:32:27.199064    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | I0810 22:32:27.198988    4371 retry.go:31] will retry after 834.206799ms: waiting for machine to come up
	I0810 22:32:28.034899    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:28.035375    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | unable to find current IP address of domain multinode-20210810223223-30291 in network mk-multinode-20210810223223-30291
	I0810 22:32:28.035403    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | I0810 22:32:28.035332    4371 retry.go:31] will retry after 746.553905ms: waiting for machine to come up
	I0810 22:32:28.783767    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:28.784218    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | unable to find current IP address of domain multinode-20210810223223-30291 in network mk-multinode-20210810223223-30291
	I0810 22:32:28.784247    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | I0810 22:32:28.784180    4371 retry.go:31] will retry after 987.362415ms: waiting for machine to come up
	I0810 22:32:29.773313    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:29.773753    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | unable to find current IP address of domain multinode-20210810223223-30291 in network mk-multinode-20210810223223-30291
	I0810 22:32:29.773784    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | I0810 22:32:29.773699    4371 retry.go:31] will retry after 1.189835008s: waiting for machine to come up
	I0810 22:32:30.964875    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:30.965436    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | unable to find current IP address of domain multinode-20210810223223-30291 in network mk-multinode-20210810223223-30291
	I0810 22:32:30.965466    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | I0810 22:32:30.965382    4371 retry.go:31] will retry after 1.677229867s: waiting for machine to come up
	I0810 22:32:32.643666    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:32.644156    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | unable to find current IP address of domain multinode-20210810223223-30291 in network mk-multinode-20210810223223-30291
	I0810 22:32:32.644191    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | I0810 22:32:32.644079    4371 retry.go:31] will retry after 2.346016261s: waiting for machine to come up
	I0810 22:32:34.992160    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:34.992686    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | unable to find current IP address of domain multinode-20210810223223-30291 in network mk-multinode-20210810223223-30291
	I0810 22:32:34.992713    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | I0810 22:32:34.992658    4371 retry.go:31] will retry after 3.36678925s: waiting for machine to come up
	I0810 22:32:38.361097    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:38.361595    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Found IP for machine: 192.168.50.32
	I0810 22:32:38.361618    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Reserving static IP address...
	I0810 22:32:38.361631    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has current primary IP address 192.168.50.32 and MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:38.362042    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | unable to find host DHCP lease matching {name: "multinode-20210810223223-30291", mac: "52:54:00:ce:d8:89", ip: "192.168.50.32"} in network mk-multinode-20210810223223-30291
	I0810 22:32:38.408232    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Reserved static IP address: 192.168.50.32
	I0810 22:32:38.408261    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Waiting for SSH to be available...
	I0810 22:32:38.408283    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | Getting to WaitForSSH function...
	I0810 22:32:38.414513    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:38.414911    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:89", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:32:38 +0000 UTC Type:0 Mac:52:54:00:ce:d8:89 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ce:d8:89}
	I0810 22:32:38.414940    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined IP address 192.168.50.32 and MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:38.415095    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | Using SSH client type: external
	I0810 22:32:38.415120    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291/id_rsa (-rw-------)
	I0810 22:32:38.415155    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0810 22:32:38.415167    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | About to run SSH command:
	I0810 22:32:38.415206    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | exit 0
	I0810 22:32:38.567481    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | SSH cmd err, output: <nil>: 
	I0810 22:32:38.567975    4347 main.go:130] libmachine: (multinode-20210810223223-30291) KVM machine creation complete!
	I0810 22:32:38.568047    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetConfigRaw
	I0810 22:32:38.568706    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .DriverName
	I0810 22:32:38.568919    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .DriverName
	I0810 22:32:38.569094    4347 main.go:130] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0810 22:32:38.569114    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetState
	I0810 22:32:38.571674    4347 main.go:130] libmachine: Detecting operating system of created instance...
	I0810 22:32:38.571689    4347 main.go:130] libmachine: Waiting for SSH to be available...
	I0810 22:32:38.571696    4347 main.go:130] libmachine: Getting to WaitForSSH function...
	I0810 22:32:38.571706    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHHostname
	I0810 22:32:38.576433    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:38.576718    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:89", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:32:38 +0000 UTC Type:0 Mac:52:54:00:ce:d8:89 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:multinode-20210810223223-30291 Clientid:01:52:54:00:ce:d8:89}
	I0810 22:32:38.576750    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined IP address 192.168.50.32 and MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:38.576945    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHPort
	I0810 22:32:38.577117    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHKeyPath
	I0810 22:32:38.577263    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHKeyPath
	I0810 22:32:38.577414    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHUsername
	I0810 22:32:38.577575    4347 main.go:130] libmachine: Using SSH client type: native
	I0810 22:32:38.577853    4347 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0810 22:32:38.577871    4347 main.go:130] libmachine: About to run SSH command:
	exit 0
	I0810 22:32:38.687548    4347 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0810 22:32:38.875986    4347 main.go:130] libmachine: Detecting the provisioner...
	I0810 22:32:38.876009    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHHostname
	I0810 22:32:38.881035    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:38.881355    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:89", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:32:38 +0000 UTC Type:0 Mac:52:54:00:ce:d8:89 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:multinode-20210810223223-30291 Clientid:01:52:54:00:ce:d8:89}
	I0810 22:32:38.881387    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined IP address 192.168.50.32 and MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:38.881491    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHPort
	I0810 22:32:38.881675    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHKeyPath
	I0810 22:32:38.881835    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHKeyPath
	I0810 22:32:38.881958    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHUsername
	I0810 22:32:38.882095    4347 main.go:130] libmachine: Using SSH client type: native
	I0810 22:32:38.882264    4347 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0810 22:32:38.882278    4347 main.go:130] libmachine: About to run SSH command:
	cat /etc/os-release
	I0810 22:32:38.992654    4347 main.go:130] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2020.02.12
	ID=buildroot
	VERSION_ID=2020.02.12
	PRETTY_NAME="Buildroot 2020.02.12"
	
	I0810 22:32:38.992768    4347 main.go:130] libmachine: found compatible host: buildroot
	I0810 22:32:38.992784    4347 main.go:130] libmachine: Provisioning with buildroot...
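Provisioner detection above works by running `cat /etc/os-release` over SSH and matching the `ID` field ("buildroot" here) against known provisioners. A small sketch of parsing that key=value format (illustrative only; libmachine has its own `OsRelease` parser with more fields and error handling):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease extracts KEY=VALUE pairs from /etc/os-release-style text,
// stripping optional double quotes around values. Simplified sketch of what
// the provisioner-detection step does with the SSH output above.
func parseOSRelease(text string) map[string]string {
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(text))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || !strings.Contains(line, "=") {
			continue
		}
		kv := strings.SplitN(line, "=", 2)
		fields[kv[0]] = strings.Trim(kv[1], `"`)
	}
	return fields
}

func main() {
	out := "NAME=Buildroot\nVERSION=2020.02.12\nID=buildroot\nPRETTY_NAME=\"Buildroot 2020.02.12\"\n"
	osr := parseOSRelease(out)
	fmt.Println(osr["ID"], osr["PRETTY_NAME"])
}
```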
	I0810 22:32:38.992796    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetMachineName
	I0810 22:32:38.993055    4347 buildroot.go:166] provisioning hostname "multinode-20210810223223-30291"
	I0810 22:32:38.993084    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetMachineName
	I0810 22:32:38.993282    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHHostname
	I0810 22:32:38.998123    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:38.998402    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:89", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:32:38 +0000 UTC Type:0 Mac:52:54:00:ce:d8:89 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:multinode-20210810223223-30291 Clientid:01:52:54:00:ce:d8:89}
	I0810 22:32:38.998451    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined IP address 192.168.50.32 and MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:38.998562    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHPort
	I0810 22:32:38.998734    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHKeyPath
	I0810 22:32:38.998865    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHKeyPath
	I0810 22:32:38.999026    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHUsername
	I0810 22:32:38.999208    4347 main.go:130] libmachine: Using SSH client type: native
	I0810 22:32:38.999353    4347 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0810 22:32:38.999367    4347 main.go:130] libmachine: About to run SSH command:
	sudo hostname multinode-20210810223223-30291 && echo "multinode-20210810223223-30291" | sudo tee /etc/hostname
	I0810 22:32:39.116794    4347 main.go:130] libmachine: SSH cmd err, output: <nil>: multinode-20210810223223-30291
	
	I0810 22:32:39.116835    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHHostname
	I0810 22:32:39.122121    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:39.122423    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:89", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:32:38 +0000 UTC Type:0 Mac:52:54:00:ce:d8:89 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:multinode-20210810223223-30291 Clientid:01:52:54:00:ce:d8:89}
	I0810 22:32:39.122461    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined IP address 192.168.50.32 and MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:39.122570    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHPort
	I0810 22:32:39.122760    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHKeyPath
	I0810 22:32:39.122917    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHKeyPath
	I0810 22:32:39.123059    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHUsername
	I0810 22:32:39.123252    4347 main.go:130] libmachine: Using SSH client type: native
	I0810 22:32:39.123425    4347 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0810 22:32:39.123450    4347 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20210810223223-30291' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20210810223223-30291/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20210810223223-30291' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0810 22:32:39.239162    4347 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0810 22:32:39.239207    4347 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube}
	I0810 22:32:39.239264    4347 buildroot.go:174] setting up certificates
	I0810 22:32:39.239277    4347 provision.go:83] configureAuth start
	I0810 22:32:39.239293    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetMachineName
	I0810 22:32:39.239588    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetIP
	I0810 22:32:39.244650    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:39.244943    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:89", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:32:38 +0000 UTC Type:0 Mac:52:54:00:ce:d8:89 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:multinode-20210810223223-30291 Clientid:01:52:54:00:ce:d8:89}
	I0810 22:32:39.244982    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined IP address 192.168.50.32 and MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:39.245047    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHHostname
	I0810 22:32:39.249030    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:39.249296    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:89", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:32:38 +0000 UTC Type:0 Mac:52:54:00:ce:d8:89 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:multinode-20210810223223-30291 Clientid:01:52:54:00:ce:d8:89}
	I0810 22:32:39.249320    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined IP address 192.168.50.32 and MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:39.249427    4347 provision.go:137] copyHostCerts
	I0810 22:32:39.249459    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem
	I0810 22:32:39.249497    4347 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem, removing ...
	I0810 22:32:39.249520    4347 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem
	I0810 22:32:39.249578    4347 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem (1082 bytes)
	I0810 22:32:39.249646    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem
	I0810 22:32:39.249665    4347 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem, removing ...
	I0810 22:32:39.249672    4347 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem
	I0810 22:32:39.249694    4347 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem (1123 bytes)
	I0810 22:32:39.249730    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem
	I0810 22:32:39.249746    4347 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem, removing ...
	I0810 22:32:39.249753    4347 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem
	I0810 22:32:39.249769    4347 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem (1679 bytes)
	I0810 22:32:39.249808    4347 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem org=jenkins.multinode-20210810223223-30291 san=[192.168.50.32 192.168.50.32 localhost 127.0.0.1 minikube multinode-20210810223223-30291]
	I0810 22:32:39.467963    4347 provision.go:171] copyRemoteCerts
	I0810 22:32:39.468023    4347 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0810 22:32:39.468053    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHHostname
	I0810 22:32:39.473286    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:39.473570    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:89", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:32:38 +0000 UTC Type:0 Mac:52:54:00:ce:d8:89 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:multinode-20210810223223-30291 Clientid:01:52:54:00:ce:d8:89}
	I0810 22:32:39.473595    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined IP address 192.168.50.32 and MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:39.473746    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHPort
	I0810 22:32:39.473942    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHKeyPath
	I0810 22:32:39.474074    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHUsername
	I0810 22:32:39.474183    4347 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291/id_rsa Username:docker}
	I0810 22:32:39.555442    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0810 22:32:39.555506    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0810 22:32:39.572241    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0810 22:32:39.572317    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes)
	I0810 22:32:39.588495    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0810 22:32:39.588544    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0810 22:32:39.604369    4347 provision.go:86] duration metric: configureAuth took 365.078335ms
	I0810 22:32:39.604394    4347 buildroot.go:189] setting minikube options for container-runtime
	I0810 22:32:39.604669    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHHostname
	I0810 22:32:39.610136    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:39.610474    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:89", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:32:38 +0000 UTC Type:0 Mac:52:54:00:ce:d8:89 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:multinode-20210810223223-30291 Clientid:01:52:54:00:ce:d8:89}
	I0810 22:32:39.610523    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined IP address 192.168.50.32 and MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:39.610637    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHPort
	I0810 22:32:39.610859    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHKeyPath
	I0810 22:32:39.611024    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHKeyPath
	I0810 22:32:39.611190    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHUsername
	I0810 22:32:39.611341    4347 main.go:130] libmachine: Using SSH client type: native
	I0810 22:32:39.611480    4347 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0810 22:32:39.611494    4347 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0810 22:32:40.330506    4347 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0810 22:32:40.330536    4347 main.go:130] libmachine: Checking connection to Docker...
	I0810 22:32:40.330544    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetURL
	I0810 22:32:40.333397    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | Using libvirt version 3000000
	I0810 22:32:40.337733    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:40.338027    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:89", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:32:38 +0000 UTC Type:0 Mac:52:54:00:ce:d8:89 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:multinode-20210810223223-30291 Clientid:01:52:54:00:ce:d8:89}
	I0810 22:32:40.338052    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined IP address 192.168.50.32 and MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:40.338195    4347 main.go:130] libmachine: Docker is up and running!
	I0810 22:32:40.338211    4347 main.go:130] libmachine: Reticulating splines...
	I0810 22:32:40.338220    4347 client.go:171] LocalClient.Create took 16.592642463s
	I0810 22:32:40.338240    4347 start.go:168] duration metric: libmachine.API.Create for "multinode-20210810223223-30291" took 16.592708779s
	I0810 22:32:40.338252    4347 start.go:267] post-start starting for "multinode-20210810223223-30291" (driver="kvm2")
	I0810 22:32:40.338260    4347 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0810 22:32:40.338278    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .DriverName
	I0810 22:32:40.338513    4347 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0810 22:32:40.338547    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHHostname
	I0810 22:32:40.342637    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:40.342919    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:89", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:32:38 +0000 UTC Type:0 Mac:52:54:00:ce:d8:89 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:multinode-20210810223223-30291 Clientid:01:52:54:00:ce:d8:89}
	I0810 22:32:40.342950    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined IP address 192.168.50.32 and MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:40.343050    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHPort
	I0810 22:32:40.343223    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHKeyPath
	I0810 22:32:40.343349    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHUsername
	I0810 22:32:40.343473    4347 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291/id_rsa Username:docker}
	I0810 22:32:40.427667    4347 ssh_runner.go:149] Run: cat /etc/os-release
	I0810 22:32:40.432262    4347 command_runner.go:124] > NAME=Buildroot
	I0810 22:32:40.432281    4347 command_runner.go:124] > VERSION=2020.02.12
	I0810 22:32:40.432286    4347 command_runner.go:124] > ID=buildroot
	I0810 22:32:40.432291    4347 command_runner.go:124] > VERSION_ID=2020.02.12
	I0810 22:32:40.432296    4347 command_runner.go:124] > PRETTY_NAME="Buildroot 2020.02.12"
	I0810 22:32:40.432320    4347 info.go:137] Remote host: Buildroot 2020.02.12
	I0810 22:32:40.432332    4347 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/addons for local assets ...
	I0810 22:32:40.432388    4347 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files for local assets ...
	I0810 22:32:40.432517    4347 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/302912.pem -> 302912.pem in /etc/ssl/certs
	I0810 22:32:40.432532    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/302912.pem -> /etc/ssl/certs/302912.pem
	I0810 22:32:40.432637    4347 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0810 22:32:40.439152    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/302912.pem --> /etc/ssl/certs/302912.pem (1708 bytes)
	I0810 22:32:40.455474    4347 start.go:270] post-start completed in 117.207443ms
	I0810 22:32:40.455530    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetConfigRaw
	I0810 22:32:40.456189    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetIP
	I0810 22:32:40.461322    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:40.461613    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:89", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:32:38 +0000 UTC Type:0 Mac:52:54:00:ce:d8:89 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:multinode-20210810223223-30291 Clientid:01:52:54:00:ce:d8:89}
	I0810 22:32:40.461647    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined IP address 192.168.50.32 and MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:40.461845    4347 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/config.json ...
	I0810 22:32:40.462059    4347 start.go:129] duration metric: createHost completed in 16.730255053s
	I0810 22:32:40.462072    4347 start.go:80] releasing machines lock for "multinode-20210810223223-30291", held for 16.730377621s
	I0810 22:32:40.462110    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .DriverName
	I0810 22:32:40.462318    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetIP
	I0810 22:32:40.466485    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:40.466754    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:89", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:32:38 +0000 UTC Type:0 Mac:52:54:00:ce:d8:89 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:multinode-20210810223223-30291 Clientid:01:52:54:00:ce:d8:89}
	I0810 22:32:40.466784    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined IP address 192.168.50.32 and MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:40.466869    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .DriverName
	I0810 22:32:40.467033    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .DriverName
	I0810 22:32:40.467493    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .DriverName
	I0810 22:32:40.467673    4347 ssh_runner.go:149] Run: systemctl --version
	I0810 22:32:40.467695    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHHostname
	I0810 22:32:40.467734    4347 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0810 22:32:40.467780    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHHostname
	I0810 22:32:40.472281    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:40.472559    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:89", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:32:38 +0000 UTC Type:0 Mac:52:54:00:ce:d8:89 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:multinode-20210810223223-30291 Clientid:01:52:54:00:ce:d8:89}
	I0810 22:32:40.472592    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined IP address 192.168.50.32 and MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:40.472682    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHPort
	I0810 22:32:40.472861    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHKeyPath
	I0810 22:32:40.472991    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHUsername
	I0810 22:32:40.473104    4347 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291/id_rsa Username:docker}
	I0810 22:32:40.473367    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:40.473722    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:89", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:32:38 +0000 UTC Type:0 Mac:52:54:00:ce:d8:89 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:multinode-20210810223223-30291 Clientid:01:52:54:00:ce:d8:89}
	I0810 22:32:40.473751    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined IP address 192.168.50.32 and MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:40.473902    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHPort
	I0810 22:32:40.474067    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHKeyPath
	I0810 22:32:40.474208    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHUsername
	I0810 22:32:40.474330    4347 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291/id_rsa Username:docker}
	I0810 22:32:40.562649    4347 command_runner.go:124] > <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
	I0810 22:32:40.562674    4347 command_runner.go:124] > <TITLE>302 Moved</TITLE></HEAD><BODY>
	I0810 22:32:40.562679    4347 command_runner.go:124] > <H1>302 Moved</H1>
	I0810 22:32:40.562683    4347 command_runner.go:124] > The document has moved
	I0810 22:32:40.562689    4347 command_runner.go:124] > <A HREF="https://cloud.google.com/container-registry/">here</A>.
	I0810 22:32:40.562693    4347 command_runner.go:124] > </BODY></HTML>
	I0810 22:32:40.563259    4347 command_runner.go:124] > systemd 244 (244)
	I0810 22:32:40.563292    4347 command_runner.go:124] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK +SYSVINIT +UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0810 22:32:40.563336    4347 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0810 22:32:40.563456    4347 ssh_runner.go:149] Run: sudo crictl images --output json
	I0810 22:32:40.586443    4347 command_runner.go:124] ! time="2021-08-10T22:32:40Z" level=warning msg="image connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	I0810 22:32:42.568998    4347 command_runner.go:124] ! time="2021-08-10T22:32:42Z" level=error msg="connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	I0810 22:32:44.556916    4347 command_runner.go:124] ! time="2021-08-10T22:32:44Z" level=error msg="connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	I0810 22:32:44.561313    4347 command_runner.go:124] > {
	I0810 22:32:44.561332    4347 command_runner.go:124] >   "images": [
	I0810 22:32:44.561337    4347 command_runner.go:124] >   ]
	I0810 22:32:44.561342    4347 command_runner.go:124] > }
	I0810 22:32:44.561562    4347 ssh_runner.go:189] Completed: sudo crictl images --output json: (3.998071829s)
	I0810 22:32:44.561692    4347 crio.go:420] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.21.3". assuming images are not preloaded.
	I0810 22:32:44.561754    4347 ssh_runner.go:149] Run: which lz4
	I0810 22:32:44.565778    4347 command_runner.go:124] > /bin/lz4
	I0810 22:32:44.565804    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0810 22:32:44.565879    4347 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0810 22:32:44.570036    4347 command_runner.go:124] ! stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0810 22:32:44.570401    4347 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0810 22:32:44.570426    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (576184326 bytes)
	I0810 22:32:47.905613    4347 crio.go:362] Took 3.339764 seconds to copy over tarball
	I0810 22:32:47.905754    4347 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0810 22:32:52.596739    4347 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (4.69094083s)
	I0810 22:32:52.596771    4347 crio.go:369] Took 4.691121 seconds to extract the tarball
	I0810 22:32:52.596783    4347 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0810 22:32:52.635335    4347 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0810 22:32:52.647855    4347 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0810 22:32:52.657276    4347 docker.go:153] disabling docker service ...
	I0810 22:32:52.657334    4347 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0810 22:32:52.667758    4347 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0810 22:32:52.676016    4347 command_runner.go:124] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0810 22:32:52.676714    4347 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0810 22:32:52.685844    4347 command_runner.go:124] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0810 22:32:52.852034    4347 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0810 22:32:52.862179    4347 command_runner.go:124] ! Unit docker.service does not exist, proceeding anyway.
	I0810 22:32:52.862670    4347 command_runner.go:124] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0810 22:32:52.992489    4347 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0810 22:32:53.003425    4347 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0810 22:32:53.017245    4347 command_runner.go:124] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0810 22:32:53.017271    4347 command_runner.go:124] > image-endpoint: unix:///var/run/crio/crio.sock
	I0810 22:32:53.017304    4347 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0810 22:32:53.024802    4347 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0810 22:32:53.024841    4347 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
	I0810 22:32:53.032539    4347 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0810 22:32:53.038986    4347 command_runner.go:124] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0810 22:32:53.039360    4347 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0810 22:32:53.039419    4347 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0810 22:32:53.055322    4347 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0810 22:32:53.062095    4347 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0810 22:32:53.193612    4347 ssh_runner.go:149] Run: sudo systemctl start crio
	I0810 22:32:53.302005    4347 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0810 22:32:53.302098    4347 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0810 22:32:53.307766    4347 command_runner.go:124] >   File: /var/run/crio/crio.sock
	I0810 22:32:53.307792    4347 command_runner.go:124] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0810 22:32:53.307803    4347 command_runner.go:124] > Device: 14h/20d	Inode: 29710       Links: 1
	I0810 22:32:53.307813    4347 command_runner.go:124] > Access: (0755/srwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0810 22:32:53.307821    4347 command_runner.go:124] > Access: 2021-08-10 22:32:44.505329266 +0000
	I0810 22:32:53.307830    4347 command_runner.go:124] > Modify: 2021-08-10 22:32:40.217909913 +0000
	I0810 22:32:53.307837    4347 command_runner.go:124] > Change: 2021-08-10 22:32:40.217909913 +0000
	I0810 22:32:53.307843    4347 command_runner.go:124] >  Birth: -
	I0810 22:32:53.307897    4347 start.go:417] Will wait 60s for crictl version
	I0810 22:32:53.307955    4347 ssh_runner.go:149] Run: sudo crictl version
	I0810 22:32:53.338312    4347 command_runner.go:124] > Version:  0.1.0
	I0810 22:32:53.338337    4347 command_runner.go:124] > RuntimeName:  cri-o
	I0810 22:32:53.338343    4347 command_runner.go:124] > RuntimeVersion:  1.20.2
	I0810 22:32:53.338350    4347 command_runner.go:124] > RuntimeApiVersion:  v1alpha1
	I0810 22:32:53.338599    4347 start.go:426] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.2
	RuntimeApiVersion:  v1alpha1
	I0810 22:32:53.338687    4347 ssh_runner.go:149] Run: crio --version
	I0810 22:32:53.605450    4347 command_runner.go:124] ! time="2021-08-10T22:32:53Z" level=info msg="Starting CRI-O, version: 1.20.2, git: d5a999ad0a35d895ded554e1e18c142075501a98(clean)"
	I0810 22:32:57.305530    4347 command_runner.go:124] > crio version 1.20.2
	I0810 22:32:57.305545    4347 command_runner.go:124] > Version:       1.20.2
	I0810 22:32:57.305552    4347 command_runner.go:124] > GitCommit:     d5a999ad0a35d895ded554e1e18c142075501a98
	I0810 22:32:57.305560    4347 command_runner.go:124] > GitTreeState:  clean
	I0810 22:32:57.305566    4347 command_runner.go:124] > BuildDate:     2021-08-06T09:19:16Z
	I0810 22:32:57.305570    4347 command_runner.go:124] > GoVersion:     go1.13.15
	I0810 22:32:57.305574    4347 command_runner.go:124] > Compiler:      gc
	I0810 22:32:57.305579    4347 command_runner.go:124] > Platform:      linux/amd64
	I0810 22:32:57.305600    4347 ssh_runner.go:189] Completed: crio --version: (3.966890142s)
	I0810 22:32:57.305670    4347 ssh_runner.go:149] Run: crio --version
	I0810 22:32:57.513335    4347 command_runner.go:124] ! time="2021-08-10T22:32:57Z" level=info msg="Starting CRI-O, version: 1.20.2, git: d5a999ad0a35d895ded554e1e18c142075501a98(clean)"
	I0810 22:32:57.515123    4347 command_runner.go:124] > crio version 1.20.2
	I0810 22:32:57.515146    4347 command_runner.go:124] > Version:       1.20.2
	I0810 22:32:57.515157    4347 command_runner.go:124] > GitCommit:     d5a999ad0a35d895ded554e1e18c142075501a98
	I0810 22:32:57.515164    4347 command_runner.go:124] > GitTreeState:  clean
	I0810 22:32:57.515173    4347 command_runner.go:124] > BuildDate:     2021-08-06T09:19:16Z
	I0810 22:32:57.515180    4347 command_runner.go:124] > GoVersion:     go1.13.15
	I0810 22:32:57.515190    4347 command_runner.go:124] > Compiler:      gc
	I0810 22:32:57.515197    4347 command_runner.go:124] > Platform:      linux/amd64
	I0810 22:32:57.523162    4347 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.2 ...
	I0810 22:32:57.523248    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetIP
	I0810 22:32:57.528697    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:57.529018    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:89", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:32:38 +0000 UTC Type:0 Mac:52:54:00:ce:d8:89 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:multinode-20210810223223-30291 Clientid:01:52:54:00:ce:d8:89}
	I0810 22:32:57.529045    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined IP address 192.168.50.32 and MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:57.529208    4347 ssh_runner.go:149] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0810 22:32:57.533352    4347 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0810 22:32:57.543504    4347 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0810 22:32:57.543552    4347 ssh_runner.go:149] Run: sudo crictl images --output json
	I0810 22:32:57.612368    4347 command_runner.go:124] > {
	I0810 22:32:57.612397    4347 command_runner.go:124] >   "images": [
	I0810 22:32:57.612404    4347 command_runner.go:124] >     {
	I0810 22:32:57.612416    4347 command_runner.go:124] >       "id": "6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb",
	I0810 22:32:57.612427    4347 command_runner.go:124] >       "repoTags": [
	I0810 22:32:57.612437    4347 command_runner.go:124] >         "docker.io/kindest/kindnetd:v20210326-1e038dc5"
	I0810 22:32:57.612442    4347 command_runner.go:124] >       ],
	I0810 22:32:57.612449    4347 command_runner.go:124] >       "repoDigests": [
	I0810 22:32:57.612462    4347 command_runner.go:124] >         "docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1",
	I0810 22:32:57.612479    4347 command_runner.go:124] >         "docker.io/kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c"
	I0810 22:32:57.612488    4347 command_runner.go:124] >       ],
	I0810 22:32:57.612495    4347 command_runner.go:124] >       "size": "119984626",
	I0810 22:32:57.612506    4347 command_runner.go:124] >       "uid": null,
	I0810 22:32:57.612515    4347 command_runner.go:124] >       "username": "",
	I0810 22:32:57.612524    4347 command_runner.go:124] >       "spec": null
	I0810 22:32:57.612530    4347 command_runner.go:124] >     },
	I0810 22:32:57.612535    4347 command_runner.go:124] >     {
	I0810 22:32:57.612547    4347 command_runner.go:124] >       "id": "9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db",
	I0810 22:32:57.612556    4347 command_runner.go:124] >       "repoTags": [
	I0810 22:32:57.612564    4347 command_runner.go:124] >         "docker.io/kubernetesui/dashboard:v2.1.0"
	I0810 22:32:57.612572    4347 command_runner.go:124] >       ],
	I0810 22:32:57.612579    4347 command_runner.go:124] >       "repoDigests": [
	I0810 22:32:57.612594    4347 command_runner.go:124] >         "docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f",
	I0810 22:32:57.612610    4347 command_runner.go:124] >         "docker.io/kubernetesui/dashboard@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6"
	I0810 22:32:57.612619    4347 command_runner.go:124] >       ],
	I0810 22:32:57.612625    4347 command_runner.go:124] >       "size": "228528983",
	I0810 22:32:57.612634    4347 command_runner.go:124] >       "uid": null,
	I0810 22:32:57.612641    4347 command_runner.go:124] >       "username": "nonroot",
	I0810 22:32:57.612652    4347 command_runner.go:124] >       "spec": null
	I0810 22:32:57.612658    4347 command_runner.go:124] >     },
	I0810 22:32:57.612663    4347 command_runner.go:124] >     {
	I0810 22:32:57.612687    4347 command_runner.go:124] >       "id": "86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4",
	I0810 22:32:57.612697    4347 command_runner.go:124] >       "repoTags": [
	I0810 22:32:57.612707    4347 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper:v1.0.4"
	I0810 22:32:57.612715    4347 command_runner.go:124] >       ],
	I0810 22:32:57.612722    4347 command_runner.go:124] >       "repoDigests": [
	I0810 22:32:57.612737    4347 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper@sha256:555981a24f184420f3be0c79d4efb6c948a85cfce84034f85a563f4151a81cbf",
	I0810 22:32:57.612753    4347 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper@sha256:d78f995c07124874c2a2e9b404cffa6bc6233668d63d6c6210574971f3d5914b"
	I0810 22:32:57.612761    4347 command_runner.go:124] >       ],
	I0810 22:32:57.612767    4347 command_runner.go:124] >       "size": "36950651",
	I0810 22:32:57.612774    4347 command_runner.go:124] >       "uid": null,
	I0810 22:32:57.612780    4347 command_runner.go:124] >       "username": "",
	I0810 22:32:57.612789    4347 command_runner.go:124] >       "spec": null
	I0810 22:32:57.612813    4347 command_runner.go:124] >     },
	I0810 22:32:57.612823    4347 command_runner.go:124] >     {
	I0810 22:32:57.612834    4347 command_runner.go:124] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0810 22:32:57.612845    4347 command_runner.go:124] >       "repoTags": [
	I0810 22:32:57.612851    4347 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0810 22:32:57.612857    4347 command_runner.go:124] >       ],
	I0810 22:32:57.612861    4347 command_runner.go:124] >       "repoDigests": [
	I0810 22:32:57.612870    4347 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0810 22:32:57.612881    4347 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0810 22:32:57.612887    4347 command_runner.go:124] >       ],
	I0810 22:32:57.612891    4347 command_runner.go:124] >       "size": "31470524",
	I0810 22:32:57.612898    4347 command_runner.go:124] >       "uid": null,
	I0810 22:32:57.612902    4347 command_runner.go:124] >       "username": "",
	I0810 22:32:57.612908    4347 command_runner.go:124] >       "spec": null
	I0810 22:32:57.612911    4347 command_runner.go:124] >     },
	I0810 22:32:57.612914    4347 command_runner.go:124] >     {
	I0810 22:32:57.612921    4347 command_runner.go:124] >       "id": "296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899",
	I0810 22:32:57.612927    4347 command_runner.go:124] >       "repoTags": [
	I0810 22:32:57.612933    4347 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns:v1.8.0"
	I0810 22:32:57.612939    4347 command_runner.go:124] >       ],
	I0810 22:32:57.612943    4347 command_runner.go:124] >       "repoDigests": [
	I0810 22:32:57.612952    4347 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61",
	I0810 22:32:57.612962    4347 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e"
	I0810 22:32:57.612966    4347 command_runner.go:124] >       ],
	I0810 22:32:57.612970    4347 command_runner.go:124] >       "size": "42585056",
	I0810 22:32:57.612975    4347 command_runner.go:124] >       "uid": null,
	I0810 22:32:57.612978    4347 command_runner.go:124] >       "username": "",
	I0810 22:32:57.612982    4347 command_runner.go:124] >       "spec": null
	I0810 22:32:57.612985    4347 command_runner.go:124] >     },
	I0810 22:32:57.612989    4347 command_runner.go:124] >     {
	I0810 22:32:57.612995    4347 command_runner.go:124] >       "id": "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934",
	I0810 22:32:57.613001    4347 command_runner.go:124] >       "repoTags": [
	I0810 22:32:57.613006    4347 command_runner.go:124] >         "k8s.gcr.io/etcd:3.4.13-0"
	I0810 22:32:57.613010    4347 command_runner.go:124] >       ],
	I0810 22:32:57.613014    4347 command_runner.go:124] >       "repoDigests": [
	I0810 22:32:57.613022    4347 command_runner.go:124] >         "k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2",
	I0810 22:32:57.613033    4347 command_runner.go:124] >         "k8s.gcr.io/etcd@sha256:bd4d2c9a19be8a492bc79df53eee199fd04b415e9993eb69f7718052602a147a"
	I0810 22:32:57.613041    4347 command_runner.go:124] >       ],
	I0810 22:32:57.613047    4347 command_runner.go:124] >       "size": "254662613",
	I0810 22:32:57.613065    4347 command_runner.go:124] >       "uid": null,
	I0810 22:32:57.613074    4347 command_runner.go:124] >       "username": "",
	I0810 22:32:57.613080    4347 command_runner.go:124] >       "spec": null
	I0810 22:32:57.613088    4347 command_runner.go:124] >     },
	I0810 22:32:57.613093    4347 command_runner.go:124] >     {
	I0810 22:32:57.613105    4347 command_runner.go:124] >       "id": "3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80",
	I0810 22:32:57.613114    4347 command_runner.go:124] >       "repoTags": [
	I0810 22:32:57.613121    4347 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver:v1.21.3"
	I0810 22:32:57.613130    4347 command_runner.go:124] >       ],
	I0810 22:32:57.613137    4347 command_runner.go:124] >       "repoDigests": [
	I0810 22:32:57.613153    4347 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2",
	I0810 22:32:57.613168    4347 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver@sha256:910cfdf034262c7b68ecb17c0885f39bdaaad07d87c9a5b6320819d8500b7ee5"
	I0810 22:32:57.613177    4347 command_runner.go:124] >       ],
	I0810 22:32:57.613183    4347 command_runner.go:124] >       "size": "126878961",
	I0810 22:32:57.613190    4347 command_runner.go:124] >       "uid": {
	I0810 22:32:57.613196    4347 command_runner.go:124] >         "value": "0"
	I0810 22:32:57.613202    4347 command_runner.go:124] >       },
	I0810 22:32:57.613208    4347 command_runner.go:124] >       "username": "",
	I0810 22:32:57.613217    4347 command_runner.go:124] >       "spec": null
	I0810 22:32:57.613222    4347 command_runner.go:124] >     },
	I0810 22:32:57.613227    4347 command_runner.go:124] >     {
	I0810 22:32:57.613238    4347 command_runner.go:124] >       "id": "bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9",
	I0810 22:32:57.613246    4347 command_runner.go:124] >       "repoTags": [
	I0810 22:32:57.613254    4347 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager:v1.21.3"
	I0810 22:32:57.613263    4347 command_runner.go:124] >       ],
	I0810 22:32:57.613269    4347 command_runner.go:124] >       "repoDigests": [
	I0810 22:32:57.613284    4347 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b",
	I0810 22:32:57.613298    4347 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager@sha256:7fb1f6614597c255b475ed8abf553e0d4e8ea211b06a90bed53eaddcfb9c354f"
	I0810 22:32:57.613306    4347 command_runner.go:124] >       ],
	I0810 22:32:57.613355    4347 command_runner.go:124] >       "size": "121087578",
	I0810 22:32:57.613371    4347 command_runner.go:124] >       "uid": {
	I0810 22:32:57.613377    4347 command_runner.go:124] >         "value": "0"
	I0810 22:32:57.613382    4347 command_runner.go:124] >       },
	I0810 22:32:57.613390    4347 command_runner.go:124] >       "username": "",
	I0810 22:32:57.613396    4347 command_runner.go:124] >       "spec": null
	I0810 22:32:57.613403    4347 command_runner.go:124] >     },
	I0810 22:32:57.613408    4347 command_runner.go:124] >     {
	I0810 22:32:57.613419    4347 command_runner.go:124] >       "id": "adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92",
	I0810 22:32:57.613428    4347 command_runner.go:124] >       "repoTags": [
	I0810 22:32:57.613439    4347 command_runner.go:124] >         "k8s.gcr.io/kube-proxy:v1.21.3"
	I0810 22:32:57.613447    4347 command_runner.go:124] >       ],
	I0810 22:32:57.613453    4347 command_runner.go:124] >       "repoDigests": [
	I0810 22:32:57.613466    4347 command_runner.go:124] >         "k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b",
	I0810 22:32:57.613480    4347 command_runner.go:124] >         "k8s.gcr.io/kube-proxy@sha256:c7778d7b97b2a822c3fe3e921d104ac42afbd38268de8df03557465780886627"
	I0810 22:32:57.613488    4347 command_runner.go:124] >       ],
	I0810 22:32:57.613495    4347 command_runner.go:124] >       "size": "105129702",
	I0810 22:32:57.613505    4347 command_runner.go:124] >       "uid": null,
	I0810 22:32:57.613513    4347 command_runner.go:124] >       "username": "",
	I0810 22:32:57.613520    4347 command_runner.go:124] >       "spec": null
	I0810 22:32:57.613525    4347 command_runner.go:124] >     },
	I0810 22:32:57.613531    4347 command_runner.go:124] >     {
	I0810 22:32:57.613541    4347 command_runner.go:124] >       "id": "6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a",
	I0810 22:32:57.613551    4347 command_runner.go:124] >       "repoTags": [
	I0810 22:32:57.613558    4347 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler:v1.21.3"
	I0810 22:32:57.613565    4347 command_runner.go:124] >       ],
	I0810 22:32:57.613572    4347 command_runner.go:124] >       "repoDigests": [
	I0810 22:32:57.613588    4347 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4",
	I0810 22:32:57.613605    4347 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler@sha256:b61779ea1bd936c137b25b3a7baa5551fbbd84fed8568d15c7c85ab1139521c0"
	I0810 22:32:57.613612    4347 command_runner.go:124] >       ],
	I0810 22:32:57.613619    4347 command_runner.go:124] >       "size": "51893338",
	I0810 22:32:57.613628    4347 command_runner.go:124] >       "uid": {
	I0810 22:32:57.613634    4347 command_runner.go:124] >         "value": "0"
	I0810 22:32:57.613644    4347 command_runner.go:124] >       },
	I0810 22:32:57.613652    4347 command_runner.go:124] >       "username": "",
	I0810 22:32:57.613658    4347 command_runner.go:124] >       "spec": null
	I0810 22:32:57.613669    4347 command_runner.go:124] >     },
	I0810 22:32:57.613680    4347 command_runner.go:124] >     {
	I0810 22:32:57.613690    4347 command_runner.go:124] >       "id": "0f8457a4c2ecaceac160805013dc3c61c63a1ff3dee74a473a36249a748e0253",
	I0810 22:32:57.613697    4347 command_runner.go:124] >       "repoTags": [
	I0810 22:32:57.613704    4347 command_runner.go:124] >         "k8s.gcr.io/pause:3.4.1"
	I0810 22:32:57.613710    4347 command_runner.go:124] >       ],
	I0810 22:32:57.613718    4347 command_runner.go:124] >       "repoDigests": [
	I0810 22:32:57.613729    4347 command_runner.go:124] >         "k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810",
	I0810 22:32:57.613744    4347 command_runner.go:124] >         "k8s.gcr.io/pause@sha256:914e745e524aa94315a25b49a7fafc0aa395e332126930593225d7a513f5a6b2"
	I0810 22:32:57.613751    4347 command_runner.go:124] >       ],
	I0810 22:32:57.613803    4347 command_runner.go:124] >       "size": "689817",
	I0810 22:32:57.613814    4347 command_runner.go:124] >       "uid": null,
	I0810 22:32:57.613818    4347 command_runner.go:124] >       "username": "",
	I0810 22:32:57.613825    4347 command_runner.go:124] >       "spec": null
	I0810 22:32:57.613829    4347 command_runner.go:124] >     }
	I0810 22:32:57.613832    4347 command_runner.go:124] >   ]
	I0810 22:32:57.613835    4347 command_runner.go:124] > }
	I0810 22:32:57.614037    4347 crio.go:424] all images are preloaded for cri-o runtime.
	I0810 22:32:57.614054    4347 crio.go:333] Images already preloaded, skipping extraction
	I0810 22:32:57.614104    4347 ssh_runner.go:149] Run: sudo crictl images --output json
	I0810 22:32:57.650955    4347 command_runner.go:124] > {
	I0810 22:32:57.650980    4347 command_runner.go:124] >   "images": [
	I0810 22:32:57.650984    4347 command_runner.go:124] >     {
	I0810 22:32:57.650993    4347 command_runner.go:124] >       "id": "6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb",
	I0810 22:32:57.650998    4347 command_runner.go:124] >       "repoTags": [
	I0810 22:32:57.651004    4347 command_runner.go:124] >         "docker.io/kindest/kindnetd:v20210326-1e038dc5"
	I0810 22:32:57.651008    4347 command_runner.go:124] >       ],
	I0810 22:32:57.651014    4347 command_runner.go:124] >       "repoDigests": [
	I0810 22:32:57.651028    4347 command_runner.go:124] >         "docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1",
	I0810 22:32:57.651045    4347 command_runner.go:124] >         "docker.io/kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c"
	I0810 22:32:57.651053    4347 command_runner.go:124] >       ],
	I0810 22:32:57.651060    4347 command_runner.go:124] >       "size": "119984626",
	I0810 22:32:57.651068    4347 command_runner.go:124] >       "uid": null,
	I0810 22:32:57.651072    4347 command_runner.go:124] >       "username": "",
	I0810 22:32:57.651077    4347 command_runner.go:124] >       "spec": null
	I0810 22:32:57.651093    4347 command_runner.go:124] >     },
	I0810 22:32:57.651099    4347 command_runner.go:124] >     {
	I0810 22:32:57.651106    4347 command_runner.go:124] >       "id": "9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db",
	I0810 22:32:57.651113    4347 command_runner.go:124] >       "repoTags": [
	I0810 22:32:57.651122    4347 command_runner.go:124] >         "docker.io/kubernetesui/dashboard:v2.1.0"
	I0810 22:32:57.651128    4347 command_runner.go:124] >       ],
	I0810 22:32:57.651136    4347 command_runner.go:124] >       "repoDigests": [
	I0810 22:32:57.651154    4347 command_runner.go:124] >         "docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f",
	I0810 22:32:57.651170    4347 command_runner.go:124] >         "docker.io/kubernetesui/dashboard@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6"
	I0810 22:32:57.651177    4347 command_runner.go:124] >       ],
	I0810 22:32:57.651183    4347 command_runner.go:124] >       "size": "228528983",
	I0810 22:32:57.651188    4347 command_runner.go:124] >       "uid": null,
	I0810 22:32:57.651193    4347 command_runner.go:124] >       "username": "nonroot",
	I0810 22:32:57.651209    4347 command_runner.go:124] >       "spec": null
	I0810 22:32:57.651218    4347 command_runner.go:124] >     },
	I0810 22:32:57.651223    4347 command_runner.go:124] >     {
	I0810 22:32:57.651235    4347 command_runner.go:124] >       "id": "86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4",
	I0810 22:32:57.651242    4347 command_runner.go:124] >       "repoTags": [
	I0810 22:32:57.651251    4347 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper:v1.0.4"
	I0810 22:32:57.651257    4347 command_runner.go:124] >       ],
	I0810 22:32:57.651265    4347 command_runner.go:124] >       "repoDigests": [
	I0810 22:32:57.651278    4347 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper@sha256:555981a24f184420f3be0c79d4efb6c948a85cfce84034f85a563f4151a81cbf",
	I0810 22:32:57.651291    4347 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper@sha256:d78f995c07124874c2a2e9b404cffa6bc6233668d63d6c6210574971f3d5914b"
	I0810 22:32:57.651297    4347 command_runner.go:124] >       ],
	I0810 22:32:57.651304    4347 command_runner.go:124] >       "size": "36950651",
	I0810 22:32:57.651310    4347 command_runner.go:124] >       "uid": null,
	I0810 22:32:57.651318    4347 command_runner.go:124] >       "username": "",
	I0810 22:32:57.651336    4347 command_runner.go:124] >       "spec": null
	I0810 22:32:57.651344    4347 command_runner.go:124] >     },
	I0810 22:32:57.651350    4347 command_runner.go:124] >     {
	I0810 22:32:57.651362    4347 command_runner.go:124] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0810 22:32:57.651368    4347 command_runner.go:124] >       "repoTags": [
	I0810 22:32:57.651375    4347 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0810 22:32:57.651379    4347 command_runner.go:124] >       ],
	I0810 22:32:57.651387    4347 command_runner.go:124] >       "repoDigests": [
	I0810 22:32:57.651403    4347 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0810 22:32:57.651419    4347 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0810 22:32:57.651426    4347 command_runner.go:124] >       ],
	I0810 22:32:57.651433    4347 command_runner.go:124] >       "size": "31470524",
	I0810 22:32:57.651449    4347 command_runner.go:124] >       "uid": null,
	I0810 22:32:57.651458    4347 command_runner.go:124] >       "username": "",
	I0810 22:32:57.651463    4347 command_runner.go:124] >       "spec": null
	I0810 22:32:57.651467    4347 command_runner.go:124] >     },
	I0810 22:32:57.651472    4347 command_runner.go:124] >     {
	I0810 22:32:57.651482    4347 command_runner.go:124] >       "id": "296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899",
	I0810 22:32:57.651493    4347 command_runner.go:124] >       "repoTags": [
	I0810 22:32:57.651503    4347 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns:v1.8.0"
	I0810 22:32:57.651512    4347 command_runner.go:124] >       ],
	I0810 22:32:57.651518    4347 command_runner.go:124] >       "repoDigests": [
	I0810 22:32:57.651531    4347 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61",
	I0810 22:32:57.651544    4347 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e"
	I0810 22:32:57.651550    4347 command_runner.go:124] >       ],
	I0810 22:32:57.651556    4347 command_runner.go:124] >       "size": "42585056",
	I0810 22:32:57.651560    4347 command_runner.go:124] >       "uid": null,
	I0810 22:32:57.651566    4347 command_runner.go:124] >       "username": "",
	I0810 22:32:57.651573    4347 command_runner.go:124] >       "spec": null
	I0810 22:32:57.651578    4347 command_runner.go:124] >     },
	I0810 22:32:57.651585    4347 command_runner.go:124] >     {
	I0810 22:32:57.651595    4347 command_runner.go:124] >       "id": "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934",
	I0810 22:32:57.651608    4347 command_runner.go:124] >       "repoTags": [
	I0810 22:32:57.651615    4347 command_runner.go:124] >         "k8s.gcr.io/etcd:3.4.13-0"
	I0810 22:32:57.651621    4347 command_runner.go:124] >       ],
	I0810 22:32:57.651628    4347 command_runner.go:124] >       "repoDigests": [
	I0810 22:32:57.651639    4347 command_runner.go:124] >         "k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2",
	I0810 22:32:57.651650    4347 command_runner.go:124] >         "k8s.gcr.io/etcd@sha256:bd4d2c9a19be8a492bc79df53eee199fd04b415e9993eb69f7718052602a147a"
	I0810 22:32:57.651655    4347 command_runner.go:124] >       ],
	I0810 22:32:57.651663    4347 command_runner.go:124] >       "size": "254662613",
	I0810 22:32:57.651669    4347 command_runner.go:124] >       "uid": null,
	I0810 22:32:57.651676    4347 command_runner.go:124] >       "username": "",
	I0810 22:32:57.651682    4347 command_runner.go:124] >       "spec": null
	I0810 22:32:57.651687    4347 command_runner.go:124] >     },
	I0810 22:32:57.651694    4347 command_runner.go:124] >     {
	I0810 22:32:57.651704    4347 command_runner.go:124] >       "id": "3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80",
	I0810 22:32:57.651714    4347 command_runner.go:124] >       "repoTags": [
	I0810 22:32:57.651722    4347 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver:v1.21.3"
	I0810 22:32:57.651731    4347 command_runner.go:124] >       ],
	I0810 22:32:57.651736    4347 command_runner.go:124] >       "repoDigests": [
	I0810 22:32:57.651749    4347 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2",
	I0810 22:32:57.651767    4347 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver@sha256:910cfdf034262c7b68ecb17c0885f39bdaaad07d87c9a5b6320819d8500b7ee5"
	I0810 22:32:57.651776    4347 command_runner.go:124] >       ],
	I0810 22:32:57.651783    4347 command_runner.go:124] >       "size": "126878961",
	I0810 22:32:57.651790    4347 command_runner.go:124] >       "uid": {
	I0810 22:32:57.651796    4347 command_runner.go:124] >         "value": "0"
	I0810 22:32:57.651803    4347 command_runner.go:124] >       },
	I0810 22:32:57.651809    4347 command_runner.go:124] >       "username": "",
	I0810 22:32:57.651816    4347 command_runner.go:124] >       "spec": null
	I0810 22:32:57.651821    4347 command_runner.go:124] >     },
	I0810 22:32:57.651827    4347 command_runner.go:124] >     {
	I0810 22:32:57.651834    4347 command_runner.go:124] >       "id": "bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9",
	I0810 22:32:57.651842    4347 command_runner.go:124] >       "repoTags": [
	I0810 22:32:57.651850    4347 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager:v1.21.3"
	I0810 22:32:57.651858    4347 command_runner.go:124] >       ],
	I0810 22:32:57.651864    4347 command_runner.go:124] >       "repoDigests": [
	I0810 22:32:57.651877    4347 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b",
	I0810 22:32:57.651893    4347 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager@sha256:7fb1f6614597c255b475ed8abf553e0d4e8ea211b06a90bed53eaddcfb9c354f"
	I0810 22:32:57.651901    4347 command_runner.go:124] >       ],
	I0810 22:32:57.651911    4347 command_runner.go:124] >       "size": "121087578",
	I0810 22:32:57.651919    4347 command_runner.go:124] >       "uid": {
	I0810 22:32:57.651927    4347 command_runner.go:124] >         "value": "0"
	I0810 22:32:57.651934    4347 command_runner.go:124] >       },
	I0810 22:32:57.651954    4347 command_runner.go:124] >       "username": "",
	I0810 22:32:57.651964    4347 command_runner.go:124] >       "spec": null
	I0810 22:32:57.651970    4347 command_runner.go:124] >     },
	I0810 22:32:57.651975    4347 command_runner.go:124] >     {
	I0810 22:32:57.651987    4347 command_runner.go:124] >       "id": "adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92",
	I0810 22:32:57.651994    4347 command_runner.go:124] >       "repoTags": [
	I0810 22:32:57.652002    4347 command_runner.go:124] >         "k8s.gcr.io/kube-proxy:v1.21.3"
	I0810 22:32:57.652007    4347 command_runner.go:124] >       ],
	I0810 22:32:57.652012    4347 command_runner.go:124] >       "repoDigests": [
	I0810 22:32:57.652024    4347 command_runner.go:124] >         "k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b",
	I0810 22:32:57.652038    4347 command_runner.go:124] >         "k8s.gcr.io/kube-proxy@sha256:c7778d7b97b2a822c3fe3e921d104ac42afbd38268de8df03557465780886627"
	I0810 22:32:57.652045    4347 command_runner.go:124] >       ],
	I0810 22:32:57.652052    4347 command_runner.go:124] >       "size": "105129702",
	I0810 22:32:57.652059    4347 command_runner.go:124] >       "uid": null,
	I0810 22:32:57.652066    4347 command_runner.go:124] >       "username": "",
	I0810 22:32:57.652073    4347 command_runner.go:124] >       "spec": null
	I0810 22:32:57.652078    4347 command_runner.go:124] >     },
	I0810 22:32:57.652087    4347 command_runner.go:124] >     {
	I0810 22:32:57.652099    4347 command_runner.go:124] >       "id": "6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a",
	I0810 22:32:57.652104    4347 command_runner.go:124] >       "repoTags": [
	I0810 22:32:57.652129    4347 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler:v1.21.3"
	I0810 22:32:57.652136    4347 command_runner.go:124] >       ],
	I0810 22:32:57.652150    4347 command_runner.go:124] >       "repoDigests": [
	I0810 22:32:57.652166    4347 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4",
	I0810 22:32:57.652179    4347 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler@sha256:b61779ea1bd936c137b25b3a7baa5551fbbd84fed8568d15c7c85ab1139521c0"
	I0810 22:32:57.652186    4347 command_runner.go:124] >       ],
	I0810 22:32:57.652191    4347 command_runner.go:124] >       "size": "51893338",
	I0810 22:32:57.652195    4347 command_runner.go:124] >       "uid": {
	I0810 22:32:57.652201    4347 command_runner.go:124] >         "value": "0"
	I0810 22:32:57.652208    4347 command_runner.go:124] >       },
	I0810 22:32:57.652214    4347 command_runner.go:124] >       "username": "",
	I0810 22:32:57.652222    4347 command_runner.go:124] >       "spec": null
	I0810 22:32:57.652228    4347 command_runner.go:124] >     },
	I0810 22:32:57.652234    4347 command_runner.go:124] >     {
	I0810 22:32:57.652244    4347 command_runner.go:124] >       "id": "0f8457a4c2ecaceac160805013dc3c61c63a1ff3dee74a473a36249a748e0253",
	I0810 22:32:57.652251    4347 command_runner.go:124] >       "repoTags": [
	I0810 22:32:57.652258    4347 command_runner.go:124] >         "k8s.gcr.io/pause:3.4.1"
	I0810 22:32:57.652268    4347 command_runner.go:124] >       ],
	I0810 22:32:57.652275    4347 command_runner.go:124] >       "repoDigests": [
	I0810 22:32:57.652283    4347 command_runner.go:124] >         "k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810",
	I0810 22:32:57.652296    4347 command_runner.go:124] >         "k8s.gcr.io/pause@sha256:914e745e524aa94315a25b49a7fafc0aa395e332126930593225d7a513f5a6b2"
	I0810 22:32:57.652302    4347 command_runner.go:124] >       ],
	I0810 22:32:57.652309    4347 command_runner.go:124] >       "size": "689817",
	I0810 22:32:57.652316    4347 command_runner.go:124] >       "uid": null,
	I0810 22:32:57.652323    4347 command_runner.go:124] >       "username": "",
	I0810 22:32:57.652329    4347 command_runner.go:124] >       "spec": null
	I0810 22:32:57.652335    4347 command_runner.go:124] >     }
	I0810 22:32:57.652340    4347 command_runner.go:124] >   ]
	I0810 22:32:57.652346    4347 command_runner.go:124] > }
	I0810 22:32:57.653173    4347 crio.go:424] all images are preloaded for cri-o runtime.
	I0810 22:32:57.653195    4347 cache_images.go:74] Images are preloaded, skipping loading
	I0810 22:32:57.653286    4347 ssh_runner.go:149] Run: crio config
	I0810 22:32:57.851973    4347 command_runner.go:124] ! time="2021-08-10T22:32:57Z" level=info msg="Starting CRI-O, version: 1.20.2, git: d5a999ad0a35d895ded554e1e18c142075501a98(clean)"
	I0810 22:32:57.855728    4347 command_runner.go:124] ! time="2021-08-10T22:32:57Z" level=warning msg="The 'registries' option in crio.conf(5) (referenced in \"/etc/crio/crio.conf\") has been deprecated and will be removed with CRI-O 1.21."
	I0810 22:32:57.855771    4347 command_runner.go:124] ! time="2021-08-10T22:32:57Z" level=warning msg="Please refer to containers-registries.conf(5) for configuring unqualified-search registries."
	I0810 22:32:57.858116    4347 command_runner.go:124] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0810 22:32:57.862859    4347 command_runner.go:124] > # The CRI-O configuration file specifies all of the available configuration
	I0810 22:32:57.862882    4347 command_runner.go:124] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0810 22:32:57.862893    4347 command_runner.go:124] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0810 22:32:57.862898    4347 command_runner.go:124] > #
	I0810 22:32:57.862906    4347 command_runner.go:124] > # Please refer to crio.conf(5) for details of all configuration options.
	I0810 22:32:57.862914    4347 command_runner.go:124] > # CRI-O supports partial configuration reload during runtime, which can be
	I0810 22:32:57.862920    4347 command_runner.go:124] > # done by sending SIGHUP to the running process. Currently supported options
	I0810 22:32:57.862929    4347 command_runner.go:124] > # are explicitly mentioned with: 'This option supports live configuration
	I0810 22:32:57.862934    4347 command_runner.go:124] > # reload'.
	I0810 22:32:57.862941    4347 command_runner.go:124] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0810 22:32:57.862990    4347 command_runner.go:124] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0810 22:32:57.863005    4347 command_runner.go:124] > # you want to change the system's defaults. If you want to modify storage just
	I0810 22:32:57.863019    4347 command_runner.go:124] > # for CRI-O, you can change the storage configuration options here.
	I0810 22:32:57.863023    4347 command_runner.go:124] > [crio]
	I0810 22:32:57.863037    4347 command_runner.go:124] > # Path to the "root directory". CRI-O stores all of its data, including
	I0810 22:32:57.863073    4347 command_runner.go:124] > # containers images, in this directory.
	I0810 22:32:57.863089    4347 command_runner.go:124] > #root = "/var/lib/containers/storage"
	I0810 22:32:57.863112    4347 command_runner.go:124] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0810 22:32:57.863126    4347 command_runner.go:124] > #runroot = "/var/run/containers/storage"
	I0810 22:32:57.863153    4347 command_runner.go:124] > # Storage driver used to manage the storage of images and containers. Please
	I0810 22:32:57.863166    4347 command_runner.go:124] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0810 22:32:57.863176    4347 command_runner.go:124] > #storage_driver = "overlay"
	I0810 22:32:57.863187    4347 command_runner.go:124] > # List to pass options to the storage driver. Please refer to
	I0810 22:32:57.863199    4347 command_runner.go:124] > # containers-storage.conf(5) to see all available storage options.
	I0810 22:32:57.863206    4347 command_runner.go:124] > #storage_option = [
	I0810 22:32:57.863210    4347 command_runner.go:124] > #]
	I0810 22:32:57.863221    4347 command_runner.go:124] > # The default log directory where all logs will go unless directly specified by
	I0810 22:32:57.863234    4347 command_runner.go:124] > # the kubelet. The log directory specified must be an absolute directory.
	I0810 22:32:57.863245    4347 command_runner.go:124] > log_dir = "/var/log/crio/pods"
	I0810 22:32:57.863255    4347 command_runner.go:124] > # Location for CRI-O to lay down the temporary version file.
	I0810 22:32:57.863268    4347 command_runner.go:124] > # It is used to check if crio wipe should wipe containers, which should
	I0810 22:32:57.863278    4347 command_runner.go:124] > # always happen on a node reboot
	I0810 22:32:57.863287    4347 command_runner.go:124] > version_file = "/var/run/crio/version"
	I0810 22:32:57.863297    4347 command_runner.go:124] > # Location for CRI-O to lay down the persistent version file.
	I0810 22:32:57.863305    4347 command_runner.go:124] > # It is used to check if crio wipe should wipe images, which should
	I0810 22:32:57.863314    4347 command_runner.go:124] > # only happen when CRI-O has been upgraded
	I0810 22:32:57.863329    4347 command_runner.go:124] > version_file_persist = "/var/lib/crio/version"
	I0810 22:32:57.863343    4347 command_runner.go:124] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0810 22:32:57.863351    4347 command_runner.go:124] > [crio.api]
	I0810 22:32:57.863360    4347 command_runner.go:124] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0810 22:32:57.863370    4347 command_runner.go:124] > listen = "/var/run/crio/crio.sock"
	I0810 22:32:57.863378    4347 command_runner.go:124] > # IP address on which the stream server will listen.
	I0810 22:32:57.863396    4347 command_runner.go:124] > stream_address = "127.0.0.1"
	I0810 22:32:57.863410    4347 command_runner.go:124] > # The port on which the stream server will listen. If the port is set to "0", then
	I0810 22:32:57.863422    4347 command_runner.go:124] > # CRI-O will allocate a random free port number.
	I0810 22:32:57.863432    4347 command_runner.go:124] > stream_port = "0"
	I0810 22:32:57.863442    4347 command_runner.go:124] > # Enable encrypted TLS transport of the stream server.
	I0810 22:32:57.863450    4347 command_runner.go:124] > stream_enable_tls = false
	I0810 22:32:57.863459    4347 command_runner.go:124] > # Length of time until open streams terminate due to lack of activity
	I0810 22:32:57.863468    4347 command_runner.go:124] > stream_idle_timeout = ""
	I0810 22:32:57.863478    4347 command_runner.go:124] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0810 22:32:57.863489    4347 command_runner.go:124] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0810 22:32:57.863498    4347 command_runner.go:124] > # minutes.
	I0810 22:32:57.863504    4347 command_runner.go:124] > stream_tls_cert = ""
	I0810 22:32:57.863517    4347 command_runner.go:124] > # Path to the key file used to serve the encrypted stream. This file can
	I0810 22:32:57.863530    4347 command_runner.go:124] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0810 22:32:57.863539    4347 command_runner.go:124] > stream_tls_key = ""
	I0810 22:32:57.863549    4347 command_runner.go:124] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0810 22:32:57.863562    4347 command_runner.go:124] > # communication with the encrypted stream. This file can change and CRI-O will
	I0810 22:32:57.863571    4347 command_runner.go:124] > # automatically pick up the changes within 5 minutes.
	I0810 22:32:57.863576    4347 command_runner.go:124] > stream_tls_ca = ""
	I0810 22:32:57.863588    4347 command_runner.go:124] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0810 22:32:57.863599    4347 command_runner.go:124] > grpc_max_send_msg_size = 16777216
	I0810 22:32:57.863613    4347 command_runner.go:124] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0810 22:32:57.863624    4347 command_runner.go:124] > grpc_max_recv_msg_size = 16777216
	I0810 22:32:57.863635    4347 command_runner.go:124] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0810 22:32:57.863644    4347 command_runner.go:124] > # and options for how to set up and manage the OCI runtime.
	I0810 22:32:57.863653    4347 command_runner.go:124] > [crio.runtime]
	I0810 22:32:57.863661    4347 command_runner.go:124] > # A list of ulimits to be set in containers by default, specified as
	I0810 22:32:57.863671    4347 command_runner.go:124] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0810 22:32:57.863680    4347 command_runner.go:124] > # "nofile=1024:2048"
	I0810 22:32:57.863690    4347 command_runner.go:124] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0810 22:32:57.863697    4347 command_runner.go:124] > #default_ulimits = [
	I0810 22:32:57.863702    4347 command_runner.go:124] > #]
	I0810 22:32:57.863713    4347 command_runner.go:124] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0810 22:32:57.863723    4347 command_runner.go:124] > no_pivot = false
	I0810 22:32:57.863732    4347 command_runner.go:124] > # decryption_keys_path is the path where the keys required for
	I0810 22:32:57.863758    4347 command_runner.go:124] > # image decryption are stored. This option supports live configuration reload.
	I0810 22:32:57.863769    4347 command_runner.go:124] > decryption_keys_path = "/etc/crio/keys/"
	I0810 22:32:57.863780    4347 command_runner.go:124] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0810 22:32:57.863796    4347 command_runner.go:124] > # Will be searched for using $PATH if empty.
	I0810 22:32:57.863805    4347 command_runner.go:124] > conmon = "/usr/libexec/crio/conmon"
	I0810 22:32:57.863812    4347 command_runner.go:124] > # Cgroup setting for conmon
	I0810 22:32:57.863819    4347 command_runner.go:124] > conmon_cgroup = "system.slice"
	I0810 22:32:57.863832    4347 command_runner.go:124] > # Environment variable list for the conmon process, used for passing necessary
	I0810 22:32:57.863841    4347 command_runner.go:124] > # environment variables to conmon or the runtime.
	I0810 22:32:57.863845    4347 command_runner.go:124] > conmon_env = [
	I0810 22:32:57.863854    4347 command_runner.go:124] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0810 22:32:57.863862    4347 command_runner.go:124] > ]
	I0810 22:32:57.863871    4347 command_runner.go:124] > # Additional environment variables to set for all the
	I0810 22:32:57.863882    4347 command_runner.go:124] > # containers. These are overridden if set in the
	I0810 22:32:57.863893    4347 command_runner.go:124] > # container image spec or in the container runtime configuration.
	I0810 22:32:57.863899    4347 command_runner.go:124] > default_env = [
	I0810 22:32:57.863906    4347 command_runner.go:124] > ]
	I0810 22:32:57.863916    4347 command_runner.go:124] > # If true, SELinux will be used for pod separation on the host.
	I0810 22:32:57.863924    4347 command_runner.go:124] > selinux = false
	I0810 22:32:57.863933    4347 command_runner.go:124] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0810 22:32:57.863946    4347 command_runner.go:124] > # for the runtime. If not specified, then the internal default seccomp profile
	I0810 22:32:57.863959    4347 command_runner.go:124] > # will be used. This option supports live configuration reload.
	I0810 22:32:57.863970    4347 command_runner.go:124] > seccomp_profile = ""
	I0810 22:32:57.863980    4347 command_runner.go:124] > # Changes the meaning of an empty seccomp profile. By default
	I0810 22:32:57.863991    4347 command_runner.go:124] > # (and according to CRI spec), an empty profile means unconfined.
	I0810 22:32:57.864004    4347 command_runner.go:124] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0810 22:32:57.864013    4347 command_runner.go:124] > # which might increase security.
	I0810 22:32:57.864021    4347 command_runner.go:124] > seccomp_use_default_when_empty = false
	I0810 22:32:57.864028    4347 command_runner.go:124] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0810 22:32:57.864041    4347 command_runner.go:124] > # profile name is "crio-default". This profile only takes effect if the user
	I0810 22:32:57.864052    4347 command_runner.go:124] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0810 22:32:57.864065    4347 command_runner.go:124] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0810 22:32:57.864073    4347 command_runner.go:124] > # This option supports live configuration reload.
	I0810 22:32:57.864083    4347 command_runner.go:124] > apparmor_profile = "crio-default"
	I0810 22:32:57.864094    4347 command_runner.go:124] > # Used to change irqbalance service config file path which is used for configuring
	I0810 22:32:57.864103    4347 command_runner.go:124] > # irqbalance daemon.
	I0810 22:32:57.864111    4347 command_runner.go:124] > irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0810 22:32:57.864138    4347 command_runner.go:124] > # Cgroup management implementation used for the runtime.
	I0810 22:32:57.864151    4347 command_runner.go:124] > cgroup_manager = "systemd"
	I0810 22:32:57.864163    4347 command_runner.go:124] > # Specify whether the image pull must be performed in a separate cgroup.
	I0810 22:32:57.864173    4347 command_runner.go:124] > separate_pull_cgroup = ""
	I0810 22:32:57.864184    4347 command_runner.go:124] > # List of default capabilities for containers. If it is empty or commented out,
	I0810 22:32:57.864200    4347 command_runner.go:124] > # only the capabilities defined in the containers json file by the user/kube
	I0810 22:32:57.864207    4347 command_runner.go:124] > # will be added.
	I0810 22:32:57.864212    4347 command_runner.go:124] > default_capabilities = [
	I0810 22:32:57.864219    4347 command_runner.go:124] > 	"CHOWN",
	I0810 22:32:57.864225    4347 command_runner.go:124] > 	"DAC_OVERRIDE",
	I0810 22:32:57.864232    4347 command_runner.go:124] > 	"FSETID",
	I0810 22:32:57.864238    4347 command_runner.go:124] > 	"FOWNER",
	I0810 22:32:57.864245    4347 command_runner.go:124] > 	"SETGID",
	I0810 22:32:57.864250    4347 command_runner.go:124] > 	"SETUID",
	I0810 22:32:57.864257    4347 command_runner.go:124] > 	"SETPCAP",
	I0810 22:32:57.864264    4347 command_runner.go:124] > 	"NET_BIND_SERVICE",
	I0810 22:32:57.864272    4347 command_runner.go:124] > 	"KILL",
	I0810 22:32:57.864278    4347 command_runner.go:124] > ]
	I0810 22:32:57.864291    4347 command_runner.go:124] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0810 22:32:57.864303    4347 command_runner.go:124] > # defined in the container json file by the user/kube will be added.
	I0810 22:32:57.864313    4347 command_runner.go:124] > default_sysctls = [
	I0810 22:32:57.864319    4347 command_runner.go:124] > ]
	I0810 22:32:57.864327    4347 command_runner.go:124] > # List of additional devices, specified as
	I0810 22:32:57.864342    4347 command_runner.go:124] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0810 22:32:57.864353    4347 command_runner.go:124] > # If it is empty or commented out, only the devices
	I0810 22:32:57.864365    4347 command_runner.go:124] > # defined in the container json file by the user/kube will be added.
	I0810 22:32:57.864374    4347 command_runner.go:124] > additional_devices = [
	I0810 22:32:57.864379    4347 command_runner.go:124] > ]
	I0810 22:32:57.864387    4347 command_runner.go:124] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0810 22:32:57.864398    4347 command_runner.go:124] > # directories does not exist, then CRI-O will automatically skip them.
	I0810 22:32:57.864404    4347 command_runner.go:124] > hooks_dir = [
	I0810 22:32:57.864412    4347 command_runner.go:124] > 	"/usr/share/containers/oci/hooks.d",
	I0810 22:32:57.864419    4347 command_runner.go:124] > ]
	I0810 22:32:57.864430    4347 command_runner.go:124] > # Path to the file specifying the defaults mounts for each container. The
	I0810 22:32:57.864443    4347 command_runner.go:124] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0810 22:32:57.864454    4347 command_runner.go:124] > # its default mounts from the following two files:
	I0810 22:32:57.864462    4347 command_runner.go:124] > #
	I0810 22:32:57.864472    4347 command_runner.go:124] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0810 22:32:57.864482    4347 command_runner.go:124] > #      override file, where users can either add in their own default mounts, or
	I0810 22:32:57.864491    4347 command_runner.go:124] > #      override the default mounts shipped with the package.
	I0810 22:32:57.864499    4347 command_runner.go:124] > #
	I0810 22:32:57.864509    4347 command_runner.go:124] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0810 22:32:57.864523    4347 command_runner.go:124] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0810 22:32:57.864537    4347 command_runner.go:124] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0810 22:32:57.864548    4347 command_runner.go:124] > #      only add mounts it finds in this file.
	I0810 22:32:57.864555    4347 command_runner.go:124] > #
	I0810 22:32:57.864562    4347 command_runner.go:124] > #default_mounts_file = ""
	I0810 22:32:57.864570    4347 command_runner.go:124] > # Maximum number of processes allowed in a container.
	I0810 22:32:57.864579    4347 command_runner.go:124] > pids_limit = 1024
	I0810 22:32:57.864593    4347 command_runner.go:124] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0810 22:32:57.864607    4347 command_runner.go:124] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0810 22:32:57.864621    4347 command_runner.go:124] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0810 22:32:57.864634    4347 command_runner.go:124] > # limit is never exceeded.
	I0810 22:32:57.864643    4347 command_runner.go:124] > log_size_max = -1
	I0810 22:32:57.864673    4347 command_runner.go:124] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0810 22:32:57.864705    4347 command_runner.go:124] > log_to_journald = false
	I0810 22:32:57.864715    4347 command_runner.go:124] > # Path to directory in which container exit files are written to by conmon.
	I0810 22:32:57.864723    4347 command_runner.go:124] > container_exits_dir = "/var/run/crio/exits"
	I0810 22:32:57.864732    4347 command_runner.go:124] > # Path to directory for container attach sockets.
	I0810 22:32:57.864740    4347 command_runner.go:124] > container_attach_socket_dir = "/var/run/crio"
	I0810 22:32:57.864749    4347 command_runner.go:124] > # The prefix to use for the source of the bind mounts.
	I0810 22:32:57.864757    4347 command_runner.go:124] > bind_mount_prefix = ""
	I0810 22:32:57.864770    4347 command_runner.go:124] > # If set to true, all containers will run in read-only mode.
	I0810 22:32:57.864779    4347 command_runner.go:124] > read_only = false
	I0810 22:32:57.864789    4347 command_runner.go:124] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0810 22:32:57.864803    4347 command_runner.go:124] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0810 22:32:57.864814    4347 command_runner.go:124] > # live configuration reload.
	I0810 22:32:57.864822    4347 command_runner.go:124] > log_level = "info"
	I0810 22:32:57.864832    4347 command_runner.go:124] > # Filter the log messages by the provided regular expression.
	I0810 22:32:57.864841    4347 command_runner.go:124] > # This option supports live configuration reload.
	I0810 22:32:57.864846    4347 command_runner.go:124] > log_filter = ""
	I0810 22:32:57.864859    4347 command_runner.go:124] > # The UID mappings for the user namespace of each container. A range is
	I0810 22:32:57.864873    4347 command_runner.go:124] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0810 22:32:57.864883    4347 command_runner.go:124] > # separated by comma.
	I0810 22:32:57.864892    4347 command_runner.go:124] > uid_mappings = ""
	I0810 22:32:57.864902    4347 command_runner.go:124] > # The GID mappings for the user namespace of each container. A range is
	I0810 22:32:57.864915    4347 command_runner.go:124] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0810 22:32:57.864924    4347 command_runner.go:124] > # separated by comma.
	I0810 22:32:57.864930    4347 command_runner.go:124] > gid_mappings = ""
	I0810 22:32:57.864939    4347 command_runner.go:124] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0810 22:32:57.864953    4347 command_runner.go:124] > # regarding the proper termination of the container. The lowest possible
	I0810 22:32:57.864967    4347 command_runner.go:124] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0810 22:32:57.864976    4347 command_runner.go:124] > ctr_stop_timeout = 30
	I0810 22:32:57.864986    4347 command_runner.go:124] > # manage_ns_lifecycle determines whether we pin and remove namespaces
	I0810 22:32:57.864996    4347 command_runner.go:124] > # and manage their lifecycle.
	I0810 22:32:57.865006    4347 command_runner.go:124] > # This option is being deprecated, and will be unconditionally true in the future.
	I0810 22:32:57.865014    4347 command_runner.go:124] > manage_ns_lifecycle = true
	I0810 22:32:57.865021    4347 command_runner.go:124] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0810 22:32:57.865036    4347 command_runner.go:124] > # when a pod does not have a private PID namespace, and does not use
	I0810 22:32:57.865047    4347 command_runner.go:124] > # a kernel separating runtime (like kata).
	I0810 22:32:57.865055    4347 command_runner.go:124] > # It requires manage_ns_lifecycle to be true.
	I0810 22:32:57.865065    4347 command_runner.go:124] > drop_infra_ctr = false
	I0810 22:32:57.865075    4347 command_runner.go:124] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0810 22:32:57.865087    4347 command_runner.go:124] > # You can use linux CPU list format to specify desired CPUs.
	I0810 22:32:57.865098    4347 command_runner.go:124] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0810 22:32:57.865104    4347 command_runner.go:124] > # infra_ctr_cpuset = ""
	I0810 22:32:57.865110    4347 command_runner.go:124] > # The directory where the state of the managed namespaces gets tracked.
	I0810 22:32:57.865118    4347 command_runner.go:124] > # Only used when manage_ns_lifecycle is true.
	I0810 22:32:57.865124    4347 command_runner.go:124] > namespaces_dir = "/var/run"
	I0810 22:32:57.865136    4347 command_runner.go:124] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0810 22:32:57.865147    4347 command_runner.go:124] > pinns_path = "/usr/bin/pinns"
	I0810 22:32:57.865157    4347 command_runner.go:124] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0810 22:32:57.865167    4347 command_runner.go:124] > # The name is matched against the runtimes map below. If this value is changed,
	I0810 22:32:57.865177    4347 command_runner.go:124] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0810 22:32:57.865185    4347 command_runner.go:124] > default_runtime = "runc"
	I0810 22:32:57.865194    4347 command_runner.go:124] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0810 22:32:57.865206    4347 command_runner.go:124] > # The runtime to use is picked based on the runtime_handler provided by the CRI.
	I0810 22:32:57.865220    4347 command_runner.go:124] > # If no runtime_handler is provided, the runtime will be picked based on the level
	I0810 22:32:57.865233    4347 command_runner.go:124] > # of trust of the workload. Each entry in the table should follow the format:
	I0810 22:32:57.865242    4347 command_runner.go:124] > #
	I0810 22:32:57.865251    4347 command_runner.go:124] > #[crio.runtime.runtimes.runtime-handler]
	I0810 22:32:57.865261    4347 command_runner.go:124] > #  runtime_path = "/path/to/the/executable"
	I0810 22:32:57.865271    4347 command_runner.go:124] > #  runtime_type = "oci"
	I0810 22:32:57.865279    4347 command_runner.go:124] > #  runtime_root = "/path/to/the/root"
	I0810 22:32:57.865285    4347 command_runner.go:124] > #  privileged_without_host_devices = false
	I0810 22:32:57.865292    4347 command_runner.go:124] > #  allowed_annotations = []
	I0810 22:32:57.865301    4347 command_runner.go:124] > # Where:
	I0810 22:32:57.865310    4347 command_runner.go:124] > # - runtime-handler: name used to identify the runtime
	I0810 22:32:57.865325    4347 command_runner.go:124] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0810 22:32:57.865340    4347 command_runner.go:124] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0810 22:32:57.865354    4347 command_runner.go:124] > #   the runtime executable name, and the runtime executable should be placed
	I0810 22:32:57.865363    4347 command_runner.go:124] > #   in $PATH.
	I0810 22:32:57.865372    4347 command_runner.go:124] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0810 22:32:57.865383    4347 command_runner.go:124] > #   omitted, an "oci" runtime is assumed.
	I0810 22:32:57.865396    4347 command_runner.go:124] > # - runtime_root (optional, string): root directory for storage of containers
	I0810 22:32:57.865406    4347 command_runner.go:124] > #   state.
	I0810 22:32:57.865419    4347 command_runner.go:124] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0810 22:32:57.865431    4347 command_runner.go:124] > #   host devices from being passed to privileged containers.
	I0810 22:32:57.865445    4347 command_runner.go:124] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0810 22:32:57.865460    4347 command_runner.go:124] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0810 22:32:57.865471    4347 command_runner.go:124] > #   The currently recognized values are:
	I0810 22:32:57.865485    4347 command_runner.go:124] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0810 22:32:57.865496    4347 command_runner.go:124] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0810 22:32:57.865509    4347 command_runner.go:124] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0810 22:32:57.865519    4347 command_runner.go:124] > [crio.runtime.runtimes.runc]
	I0810 22:32:57.865527    4347 command_runner.go:124] > runtime_path = "/usr/bin/runc"
	I0810 22:32:57.865535    4347 command_runner.go:124] > runtime_type = "oci"
	I0810 22:32:57.865543    4347 command_runner.go:124] > runtime_root = "/run/runc"
	I0810 22:32:57.865553    4347 command_runner.go:124] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0810 22:32:57.865559    4347 command_runner.go:124] > # running containers
	I0810 22:32:57.865567    4347 command_runner.go:124] > #[crio.runtime.runtimes.crun]
	I0810 22:32:57.865580    4347 command_runner.go:124] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0810 22:32:57.865591    4347 command_runner.go:124] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0810 22:32:57.865605    4347 command_runner.go:124] > # surface and mitigating the consequences of containers breakout.
	I0810 22:32:57.865618    4347 command_runner.go:124] > # Kata Containers with the default configured VMM
	I0810 22:32:57.865629    4347 command_runner.go:124] > #[crio.runtime.runtimes.kata-runtime]
	I0810 22:32:57.865637    4347 command_runner.go:124] > # Kata Containers with the QEMU VMM
	I0810 22:32:57.865645    4347 command_runner.go:124] > #[crio.runtime.runtimes.kata-qemu]
	I0810 22:32:57.865653    4347 command_runner.go:124] > # Kata Containers with the Firecracker VMM
	I0810 22:32:57.865663    4347 command_runner.go:124] > #[crio.runtime.runtimes.kata-fc]
	I0810 22:32:57.865675    4347 command_runner.go:124] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0810 22:32:57.865683    4347 command_runner.go:124] > #
	I0810 22:32:57.865693    4347 command_runner.go:124] > # CRI-O reads its configured registries defaults from the system wide
	I0810 22:32:57.865706    4347 command_runner.go:124] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0810 22:32:57.865719    4347 command_runner.go:124] > # you want to modify just CRI-O, you can change the registries configuration in
	I0810 22:32:57.865731    4347 command_runner.go:124] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0810 22:32:57.865738    4347 command_runner.go:124] > # use the system's defaults from /etc/containers/registries.conf.
	I0810 22:32:57.865746    4347 command_runner.go:124] > [crio.image]
	I0810 22:32:57.865768    4347 command_runner.go:124] > # Default transport for pulling images from a remote container storage.
	I0810 22:32:57.865779    4347 command_runner.go:124] > default_transport = "docker://"
	I0810 22:32:57.865791    4347 command_runner.go:124] > # The path to a file containing credentials necessary for pulling images from
	I0810 22:32:57.865804    4347 command_runner.go:124] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0810 22:32:57.865816    4347 command_runner.go:124] > global_auth_file = ""
	I0810 22:32:57.865828    4347 command_runner.go:124] > # The image used to instantiate infra containers.
	I0810 22:32:57.865836    4347 command_runner.go:124] > # This option supports live configuration reload.
	I0810 22:32:57.865847    4347 command_runner.go:124] > pause_image = "k8s.gcr.io/pause:3.4.1"
	I0810 22:32:57.865858    4347 command_runner.go:124] > # The path to a file containing credentials specific for pulling the pause_image from
	I0810 22:32:57.865871    4347 command_runner.go:124] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0810 22:32:57.865882    4347 command_runner.go:124] > # This option supports live configuration reload.
	I0810 22:32:57.865893    4347 command_runner.go:124] > pause_image_auth_file = ""
	I0810 22:32:57.865904    4347 command_runner.go:124] > # The command to run to have a container stay in the paused state.
	I0810 22:32:57.865913    4347 command_runner.go:124] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0810 22:32:57.865925    4347 command_runner.go:124] > # specified in the pause image. When commented out, it will fallback to the
	I0810 22:32:57.865935    4347 command_runner.go:124] > # default: "/pause". This option supports live configuration reload.
	I0810 22:32:57.865945    4347 command_runner.go:124] > pause_command = "/pause"
	I0810 22:32:57.865957    4347 command_runner.go:124] > # Path to the file which decides what sort of policy we use when deciding
	I0810 22:32:57.865970    4347 command_runner.go:124] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0810 22:32:57.865983    4347 command_runner.go:124] > # this option be used, as the default behavior of using the system-wide default
	I0810 22:32:57.865995    4347 command_runner.go:124] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0810 22:32:57.866004    4347 command_runner.go:124] > # refer to containers-policy.json(5) for more details.
	I0810 22:32:57.866009    4347 command_runner.go:124] > signature_policy = ""
	I0810 22:32:57.866021    4347 command_runner.go:124] > # List of registries to skip TLS verification for pulling images. Please
	I0810 22:32:57.866032    4347 command_runner.go:124] > # consider configuring the registries via /etc/containers/registries.conf before
	I0810 22:32:57.866042    4347 command_runner.go:124] > # changing them here.
	I0810 22:32:57.866049    4347 command_runner.go:124] > #insecure_registries = "[]"
	I0810 22:32:57.866061    4347 command_runner.go:124] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0810 22:32:57.866070    4347 command_runner.go:124] > # ignore; the latter will ignore volumes entirely.
	I0810 22:32:57.866078    4347 command_runner.go:124] > image_volumes = "mkdir"
	I0810 22:32:57.866088    4347 command_runner.go:124] > # List of registries to be used when pulling an unqualified image (e.g.,
	I0810 22:32:57.866097    4347 command_runner.go:124] > # "alpine:latest"). By default, registries is set to "docker.io" for
	I0810 22:32:57.866107    4347 command_runner.go:124] > # compatibility reasons. Depending on your workload and use case you may add more
	I0810 22:32:57.866120    4347 command_runner.go:124] > # registries (e.g., "quay.io", "registry.fedoraproject.org",
	I0810 22:32:57.866129    4347 command_runner.go:124] > # "registry.opensuse.org", etc.).
	I0810 22:32:57.866135    4347 command_runner.go:124] > #registries = [
	I0810 22:32:57.866145    4347 command_runner.go:124] > # 	"docker.io",
	I0810 22:32:57.866151    4347 command_runner.go:124] > #]
	I0810 22:32:57.866160    4347 command_runner.go:124] > # Temporary directory to use for storing big files
	I0810 22:32:57.866168    4347 command_runner.go:124] > big_files_temporary_dir = ""
	I0810 22:32:57.866179    4347 command_runner.go:124] > # The crio.network table contains settings pertaining to the management of
	I0810 22:32:57.866184    4347 command_runner.go:124] > # CNI plugins.
	I0810 22:32:57.866192    4347 command_runner.go:124] > [crio.network]
	I0810 22:32:57.866202    4347 command_runner.go:124] > # The default CNI network name to be selected. If not set or "", then
	I0810 22:32:57.866216    4347 command_runner.go:124] > # CRI-O will pick-up the first one found in network_dir.
	I0810 22:32:57.866225    4347 command_runner.go:124] > # cni_default_network = "kindnet"
	I0810 22:32:57.866235    4347 command_runner.go:124] > # Path to the directory where CNI configuration files are located.
	I0810 22:32:57.866245    4347 command_runner.go:124] > network_dir = "/etc/cni/net.d/"
	I0810 22:32:57.866254    4347 command_runner.go:124] > # Paths to directories where CNI plugin binaries are located.
	I0810 22:32:57.866264    4347 command_runner.go:124] > plugin_dirs = [
	I0810 22:32:57.866271    4347 command_runner.go:124] > 	"/opt/cni/bin/",
	I0810 22:32:57.866275    4347 command_runner.go:124] > ]
	I0810 22:32:57.866282    4347 command_runner.go:124] > # A necessary configuration for Prometheus based metrics retrieval
	I0810 22:32:57.866291    4347 command_runner.go:124] > [crio.metrics]
	I0810 22:32:57.866300    4347 command_runner.go:124] > # Globally enable or disable metrics support.
	I0810 22:32:57.866310    4347 command_runner.go:124] > enable_metrics = true
	I0810 22:32:57.866322    4347 command_runner.go:124] > # The port on which the metrics server will listen.
	I0810 22:32:57.866331    4347 command_runner.go:124] > metrics_port = 9090
	I0810 22:32:57.866401    4347 command_runner.go:124] > # Local socket path to bind the metrics server to
	I0810 22:32:57.866416    4347 command_runner.go:124] > metrics_socket = ""
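The dump above is CRI-O's TOML configuration echoed line by line. When post-processing such logs it can be handy to pull individual `key = value` settings back out; below is a minimal Go sketch (stdlib only, deliberately not a full TOML parser — the helper name `lookup` and the sample snippet are illustrative, not part of minikube):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// lookup scans a crio.conf-style dump for a simple `key = value`
// pair, skipping commented-out defaults. It is a rough sketch for
// log post-processing, not a spec-compliant TOML parser.
func lookup(conf, key string) (string, bool) {
	sc := bufio.NewScanner(strings.NewReader(conf))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "#") {
			continue // commented out: the built-in default applies
		}
		if k, v, ok := strings.Cut(line, "="); ok && strings.TrimSpace(k) == key {
			return strings.Trim(strings.TrimSpace(v), `"`), true
		}
	}
	return "", false
}

func main() {
	conf := `
cgroup_manager = "systemd"
pids_limit = 1024
#insecure_registries = "[]"
`
	v, _ := lookup(conf, "pids_limit")
	fmt.Println(v) // 1024
	_, ok := lookup(conf, "insecure_registries")
	fmt.Println(ok) // false: commented out, so the default applies
}
```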
	I0810 22:32:57.866486    4347 cni.go:93] Creating CNI manager for ""
	I0810 22:32:57.866504    4347 cni.go:154] 1 nodes found, recommending kindnet
	I0810 22:32:57.866519    4347 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0810 22:32:57.866536    4347 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.32 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20210810223223-30291 NodeName:multinode-20210810223223-30291 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.50.32 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0810 22:32:57.866698    4347 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.32
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "multinode-20210810223223-30291"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0810 22:32:57.866809    4347 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --hostname-override=multinode-20210810223223-30291 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.32 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:multinode-20210810223223-30291 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0810 22:32:57.866876    4347 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0810 22:32:57.874702    4347 command_runner.go:124] > kubeadm
	I0810 22:32:57.874716    4347 command_runner.go:124] > kubectl
	I0810 22:32:57.874720    4347 command_runner.go:124] > kubelet
	I0810 22:32:57.874911    4347 binaries.go:44] Found k8s binaries, skipping transfer
	I0810 22:32:57.874965    4347 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0810 22:32:57.881899    4347 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (510 bytes)
	I0810 22:32:57.893370    4347 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0810 22:32:57.904537    4347 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2074 bytes)
	I0810 22:32:57.915861    4347 ssh_runner.go:149] Run: grep 192.168.50.32	control-plane.minikube.internal$ /etc/hosts
	I0810 22:32:57.920843    4347 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0810 22:32:57.931480    4347 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291 for IP: 192.168.50.32
	I0810 22:32:57.931530    4347 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key
	I0810 22:32:57.931547    4347 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key
	I0810 22:32:57.931595    4347 certs.go:294] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/client.key
	I0810 22:32:57.931610    4347 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/client.crt with IP's: []
	I0810 22:32:58.003323    4347 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/client.crt ...
	I0810 22:32:58.003356    4347 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/client.crt: {Name:mk17a539d20321f4db5af5b2734d077b910d767c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:32:58.003566    4347 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/client.key ...
	I0810 22:32:58.003579    4347 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/client.key: {Name:mkf91e68e3a24af11429ac7001aa796033230923 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:32:58.003666    4347 certs.go:294] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/apiserver.key.d5d970b2
	I0810 22:32:58.003678    4347 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/apiserver.crt.d5d970b2 with IP's: [192.168.50.32 10.96.0.1 127.0.0.1 10.0.0.1]
	I0810 22:32:58.188567    4347 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/apiserver.crt.d5d970b2 ...
	I0810 22:32:58.188599    4347 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/apiserver.crt.d5d970b2: {Name:mk4c9f9fdbfe34760c33271a67021f8f00eb74cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:32:58.188786    4347 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/apiserver.key.d5d970b2 ...
	I0810 22:32:58.188799    4347 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/apiserver.key.d5d970b2: {Name:mk1765a0ac8d1c92eb5b9f050679d0d9d4659cf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:32:58.188876    4347 certs.go:305] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/apiserver.crt.d5d970b2 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/apiserver.crt
	I0810 22:32:58.188939    4347 certs.go:309] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/apiserver.key.d5d970b2 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/apiserver.key
	I0810 22:32:58.188994    4347 certs.go:294] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/proxy-client.key
	I0810 22:32:58.189002    4347 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/proxy-client.crt with IP's: []
	I0810 22:32:58.299072    4347 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/proxy-client.crt ...
	I0810 22:32:58.299104    4347 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/proxy-client.crt: {Name:mke338d5688758093711da9f55ca5536a523d43a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:32:58.299308    4347 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/proxy-client.key ...
	I0810 22:32:58.299368    4347 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/proxy-client.key: {Name:mk78c6b536821d16696f16ce642cf1181cdc7730 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:32:58.299469    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0810 22:32:58.299488    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0810 22:32:58.299498    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0810 22:32:58.299507    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0810 22:32:58.299519    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0810 22:32:58.299535    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0810 22:32:58.299548    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0810 22:32:58.299561    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0810 22:32:58.299611    4347 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/30291.pem (1338 bytes)
	W0810 22:32:58.299659    4347 certs.go:369] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/30291_empty.pem, impossibly tiny 0 bytes
	I0810 22:32:58.299675    4347 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem (1679 bytes)
	I0810 22:32:58.299708    4347 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem (1082 bytes)
	I0810 22:32:58.299731    4347 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem (1123 bytes)
	I0810 22:32:58.299753    4347 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem (1679 bytes)
	I0810 22:32:58.299797    4347 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/302912.pem (1708 bytes)
	I0810 22:32:58.299824    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0810 22:32:58.299839    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/30291.pem -> /usr/share/ca-certificates/30291.pem
	I0810 22:32:58.299850    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/302912.pem -> /usr/share/ca-certificates/302912.pem
	I0810 22:32:58.300769    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0810 22:32:58.317752    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0810 22:32:58.334154    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0810 22:32:58.350238    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0810 22:32:58.366402    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0810 22:32:58.383292    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0810 22:32:58.399598    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0810 22:32:58.416865    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0810 22:32:58.433080    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0810 22:32:58.449061    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/30291.pem --> /usr/share/ca-certificates/30291.pem (1338 bytes)
	I0810 22:32:58.464946    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/302912.pem --> /usr/share/ca-certificates/302912.pem (1708 bytes)
	I0810 22:32:58.481673    4347 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0810 22:32:58.492957    4347 ssh_runner.go:149] Run: openssl version
	I0810 22:32:58.498798    4347 command_runner.go:124] > OpenSSL 1.1.1k  25 Mar 2021
	I0810 22:32:58.498856    4347 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0810 22:32:58.506504    4347 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0810 22:32:58.511065    4347 command_runner.go:124] > -rw-r--r-- 1 root root 1111 Aug 10 22:18 /usr/share/ca-certificates/minikubeCA.pem
	I0810 22:32:58.511199    4347 certs.go:416] hashing: -rw-r--r-- 1 root root 1111 Aug 10 22:18 /usr/share/ca-certificates/minikubeCA.pem
	I0810 22:32:58.511248    4347 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0810 22:32:58.517002    4347 command_runner.go:124] > b5213941
	I0810 22:32:58.517066    4347 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0810 22:32:58.524968    4347 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/30291.pem && ln -fs /usr/share/ca-certificates/30291.pem /etc/ssl/certs/30291.pem"
	I0810 22:32:58.532847    4347 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/30291.pem
	I0810 22:32:58.537289    4347 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Aug 10 22:27 /usr/share/ca-certificates/30291.pem
	I0810 22:32:58.537315    4347 certs.go:416] hashing: -rw-r--r-- 1 root root 1338 Aug 10 22:27 /usr/share/ca-certificates/30291.pem
	I0810 22:32:58.537348    4347 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/30291.pem
	I0810 22:32:58.543181    4347 command_runner.go:124] > 51391683
	I0810 22:32:58.543227    4347 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/30291.pem /etc/ssl/certs/51391683.0"
	I0810 22:32:58.550803    4347 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/302912.pem && ln -fs /usr/share/ca-certificates/302912.pem /etc/ssl/certs/302912.pem"
	I0810 22:32:58.558884    4347 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/302912.pem
	I0810 22:32:58.564254    4347 command_runner.go:124] > -rw-r--r-- 1 root root 1708 Aug 10 22:27 /usr/share/ca-certificates/302912.pem
	I0810 22:32:58.564286    4347 certs.go:416] hashing: -rw-r--r-- 1 root root 1708 Aug 10 22:27 /usr/share/ca-certificates/302912.pem
	I0810 22:32:58.564324    4347 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/302912.pem
	I0810 22:32:58.570156    4347 command_runner.go:124] > 3ec20f2e
	I0810 22:32:58.570208    4347 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/302912.pem /etc/ssl/certs/3ec20f2e.0"
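	The three `openssl x509 -hash` / `ln -fs` pairs above populate the trust store using OpenSSL's subject-hash naming convention: a CA is looked up via a `<hash>.0` link, where the hash is derived from the certificate's subject name rather than its filename. A minimal self-contained sketch of that mechanism (a throwaway self-signed cert in a temp directory stands in for minikube's real CA; all paths here are hypothetical):

```shell
# Demonstrate the <subject-hash>.0 symlink scheme behind /etc/ssl/certs.
# The cert generated here is a stand-in for minikubeCA.pem from the log.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" -days 1 \
  -keyout "$tmp/ca.key" -out "$tmp/ca.pem" 2>/dev/null
# Same command the log runs: hash the cert's subject name (8 hex chars).
hash=$(openssl x509 -hash -noout -in "$tmp/ca.pem")
# Same link the log creates, pointed into our temp trust dir instead.
ln -fs "$tmp/ca.pem" "$tmp/$hash.0"
# OpenSSL can now locate the CA through the hashed directory.
openssl verify -CApath "$tmp" "$tmp/ca.pem"
```

	This is why the log links `b5213941.0`, `51391683.0`, and `3ec20f2e.0`: each name is the subject hash of the corresponding PEM file.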
	I0810 22:32:58.578077    4347 kubeadm.go:390] StartCluster: {Name:multinode-20210810223223-30291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12122/minikube-v1.22.0-1628238775-12122.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-20210810223223-30291 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.32 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0}
	I0810 22:32:58.578160    4347 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0810 22:32:58.578197    4347 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0810 22:32:58.610996    4347 cri.go:76] found id: ""
	I0810 22:32:58.611056    4347 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0810 22:32:58.617953    4347 command_runner.go:124] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0810 22:32:58.618042    4347 command_runner.go:124] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0810 22:32:58.618087    4347 command_runner.go:124] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0810 22:32:58.618303    4347 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0810 22:32:58.625001    4347 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0810 22:32:58.631226    4347 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0810 22:32:58.631253    4347 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0810 22:32:58.631265    4347 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0810 22:32:58.631277    4347 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0810 22:32:58.631317    4347 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
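	The `ls: cannot access` errors above are the expected first-boot path: minikube probes for prior kubelet/etcd state and for existing kubeconfigs, and treats a non-zero `ls` exit (status 2) as a fresh node, skipping stale-config cleanup. The same probe pattern, reduced to a self-contained sketch with hypothetical file names:

```shell
# Probe for prior-state files; ls exits non-zero if any path is missing,
# which the caller reads as "no previous cluster, skip cleanup".
tmp=$(mktemp -d)
touch "$tmp/admin.conf"   # only one of the two expected files exists
if ls "$tmp/admin.conf" "$tmp/kubelet.conf" >/dev/null 2>&1; then
  state=stale-config-present
else
  state=fresh-node        # the branch the log takes above
fi
echo "$state"
```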
	I0810 22:32:58.631359    4347 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	I0810 22:32:59.088872    4347 command_runner.go:124] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0810 22:33:19.855261    4347 command_runner.go:124] > [init] Using Kubernetes version: v1.21.3
	I0810 22:33:19.855342    4347 command_runner.go:124] > [preflight] Running pre-flight checks
	I0810 22:33:19.855418    4347 command_runner.go:124] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0810 22:33:19.855553    4347 command_runner.go:124] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0810 22:33:19.855710    4347 command_runner.go:124] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0810 22:33:19.857400    4347 out.go:204]   - Generating certificates and keys ...
	I0810 22:33:19.855905    4347 command_runner.go:124] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0810 22:33:19.857506    4347 command_runner.go:124] > [certs] Using existing ca certificate authority
	I0810 22:33:19.857596    4347 command_runner.go:124] > [certs] Using existing apiserver certificate and key on disk
	I0810 22:33:19.857703    4347 command_runner.go:124] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0810 22:33:19.857789    4347 command_runner.go:124] > [certs] Generating "front-proxy-ca" certificate and key
	I0810 22:33:19.857918    4347 command_runner.go:124] > [certs] Generating "front-proxy-client" certificate and key
	I0810 22:33:19.857998    4347 command_runner.go:124] > [certs] Generating "etcd/ca" certificate and key
	I0810 22:33:19.858067    4347 command_runner.go:124] > [certs] Generating "etcd/server" certificate and key
	I0810 22:33:19.858264    4347 command_runner.go:124] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-20210810223223-30291] and IPs [192.168.50.32 127.0.0.1 ::1]
	I0810 22:33:19.858337    4347 command_runner.go:124] > [certs] Generating "etcd/peer" certificate and key
	I0810 22:33:19.858509    4347 command_runner.go:124] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-20210810223223-30291] and IPs [192.168.50.32 127.0.0.1 ::1]
	I0810 22:33:19.858602    4347 command_runner.go:124] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0810 22:33:19.858694    4347 command_runner.go:124] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0810 22:33:19.858758    4347 command_runner.go:124] > [certs] Generating "sa" key and public key
	I0810 22:33:19.858827    4347 command_runner.go:124] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0810 22:33:19.858890    4347 command_runner.go:124] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0810 22:33:19.858961    4347 command_runner.go:124] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0810 22:33:19.859061    4347 command_runner.go:124] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0810 22:33:19.859143    4347 command_runner.go:124] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0810 22:33:19.859278    4347 command_runner.go:124] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0810 22:33:19.859383    4347 command_runner.go:124] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0810 22:33:19.859439    4347 command_runner.go:124] > [kubelet-start] Starting the kubelet
	I0810 22:33:19.860954    4347 out.go:204]   - Booting up control plane ...
	I0810 22:33:19.859602    4347 command_runner.go:124] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0810 22:33:19.861065    4347 command_runner.go:124] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0810 22:33:19.861163    4347 command_runner.go:124] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0810 22:33:19.861244    4347 command_runner.go:124] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0810 22:33:19.861352    4347 command_runner.go:124] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0810 22:33:19.861519    4347 command_runner.go:124] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0810 22:33:19.861617    4347 command_runner.go:124] > [apiclient] All control plane components are healthy after 16.015766 seconds
	I0810 22:33:19.861760    4347 command_runner.go:124] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0810 22:33:19.861936    4347 command_runner.go:124] > [kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
	I0810 22:33:19.861992    4347 command_runner.go:124] > [upload-certs] Skipping phase. Please see --upload-certs
	I0810 22:33:19.862191    4347 command_runner.go:124] > [mark-control-plane] Marking the node multinode-20210810223223-30291 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0810 22:33:19.863747    4347 out.go:204]   - Configuring RBAC rules ...
	I0810 22:33:19.862296    4347 command_runner.go:124] > [bootstrap-token] Using token: jxsfae.kz2mrngz77ughh9a
	I0810 22:33:19.863866    4347 command_runner.go:124] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0810 22:33:19.863971    4347 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0810 22:33:19.864151    4347 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0810 22:33:19.864278    4347 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0810 22:33:19.864376    4347 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0810 22:33:19.864457    4347 command_runner.go:124] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0810 22:33:19.864553    4347 command_runner.go:124] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0810 22:33:19.864598    4347 command_runner.go:124] > [addons] Applied essential addon: CoreDNS
	I0810 22:33:19.864641    4347 command_runner.go:124] > [addons] Applied essential addon: kube-proxy
	I0810 22:33:19.864693    4347 command_runner.go:124] > Your Kubernetes control-plane has initialized successfully!
	I0810 22:33:19.864767    4347 command_runner.go:124] > To start using your cluster, you need to run the following as a regular user:
	I0810 22:33:19.864820    4347 command_runner.go:124] >   mkdir -p $HOME/.kube
	I0810 22:33:19.864910    4347 command_runner.go:124] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0810 22:33:19.865001    4347 command_runner.go:124] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0810 22:33:19.865090    4347 command_runner.go:124] > Alternatively, if you are the root user, you can run:
	I0810 22:33:19.865166    4347 command_runner.go:124] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0810 22:33:19.865245    4347 command_runner.go:124] > You should now deploy a pod network to the cluster.
	I0810 22:33:19.865320    4347 command_runner.go:124] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0810 22:33:19.865381    4347 command_runner.go:124] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0810 22:33:19.865452    4347 command_runner.go:124] > You can now join any number of control-plane nodes by copying certificate authorities
	I0810 22:33:19.865518    4347 command_runner.go:124] > and service account keys on each node and then running the following as root:
	I0810 22:33:19.865595    4347 command_runner.go:124] >   kubeadm join control-plane.minikube.internal:8443 --token jxsfae.kz2mrngz77ughh9a \
	I0810 22:33:19.865682    4347 command_runner.go:124] > 	--discovery-token-ca-cert-hash sha256:792de24c5d5a120bf4aa3a25755c9ac1b4ccaeb2dbca2444b5b705903a56bd34 \
	I0810 22:33:19.865701    4347 command_runner.go:124] > 	--control-plane 
	I0810 22:33:19.865795    4347 command_runner.go:124] > Then you can join any number of worker nodes by running the following on each as root:
	I0810 22:33:19.865943    4347 command_runner.go:124] > kubeadm join control-plane.minikube.internal:8443 --token jxsfae.kz2mrngz77ughh9a \
	I0810 22:33:19.866030    4347 command_runner.go:124] > 	--discovery-token-ca-cert-hash sha256:792de24c5d5a120bf4aa3a25755c9ac1b4ccaeb2dbca2444b5b705903a56bd34 
	I0810 22:33:19.866052    4347 cni.go:93] Creating CNI manager for ""
	I0810 22:33:19.866061    4347 cni.go:154] 1 nodes found, recommending kindnet
	I0810 22:33:19.867722    4347 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0810 22:33:19.867788    4347 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0810 22:33:19.876032    4347 command_runner.go:124] >   File: /opt/cni/bin/portmap
	I0810 22:33:19.876052    4347 command_runner.go:124] >   Size: 2853400   	Blocks: 5576       IO Block: 4096   regular file
	I0810 22:33:19.876059    4347 command_runner.go:124] > Device: 10h/16d	Inode: 22873       Links: 1
	I0810 22:33:19.876069    4347 command_runner.go:124] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0810 22:33:19.876077    4347 command_runner.go:124] > Access: 2021-08-10 22:32:38.220478056 +0000
	I0810 22:33:19.876087    4347 command_runner.go:124] > Modify: 2021-08-06 09:23:24.000000000 +0000
	I0810 22:33:19.876095    4347 command_runner.go:124] > Change: 2021-08-10 22:32:33.951478056 +0000
	I0810 22:33:19.876102    4347 command_runner.go:124] >  Birth: -
	I0810 22:33:19.876486    4347 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0810 22:33:19.876501    4347 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0810 22:33:19.912205    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0810 22:33:20.321470    4347 command_runner.go:124] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0810 22:33:20.350894    4347 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0810 22:33:20.370625    4347 command_runner.go:124] > serviceaccount/kindnet created
	I0810 22:33:20.389242    4347 command_runner.go:124] > daemonset.apps/kindnet created
	I0810 22:33:20.391588    4347 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0810 22:33:20.391674    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:20.391693    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=877a5691753f15214a0c269ac69dcdc5a4d99fcd minikube.k8s.io/name=multinode-20210810223223-30291 minikube.k8s.io/updated_at=2021_08_10T22_33_20_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:20.407345    4347 command_runner.go:124] > -16
	I0810 22:33:20.407383    4347 ops.go:34] apiserver oom_adj: -16
	I0810 22:33:20.570795    4347 command_runner.go:124] > node/multinode-20210810223223-30291 labeled
	I0810 22:33:20.572850    4347 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0810 22:33:20.572929    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:20.680992    4347 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:33:21.183711    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:21.285793    4347 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:33:21.683325    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:21.787393    4347 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:33:22.183217    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:22.280960    4347 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:33:22.683627    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:22.786300    4347 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:33:23.184001    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:23.281890    4347 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:33:23.683719    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:23.783715    4347 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:33:24.183071    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:24.283417    4347 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:33:24.683673    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:24.782075    4347 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:33:25.183170    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:25.462085    4347 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:33:25.683395    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:25.808072    4347 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:33:26.183714    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:26.287318    4347 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:33:26.684099    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:26.788608    4347 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:33:27.183123    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:27.295503    4347 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:33:27.683181    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:27.790159    4347 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:33:28.183200    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:28.289849    4347 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:33:28.683553    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:28.794165    4347 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:33:29.184031    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:29.289855    4347 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:33:29.683422    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:29.949111    4347 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:33:30.183826    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:30.301589    4347 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:33:30.683318    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:30.795011    4347 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:33:31.183583    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:31.317516    4347 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:33:31.683042    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:31.802754    4347 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:33:32.183318    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:32.312324    4347 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:33:32.683672    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:32.805073    4347 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:33:33.184046    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:33.348038    4347 command_runner.go:124] > NAME      SECRETS   AGE
	I0810 22:33:33.348064    4347 command_runner.go:124] > default   0         0s
	I0810 22:33:33.349048    4347 kubeadm.go:985] duration metric: took 12.957439157s to wait for elevateKubeSystemPrivileges.
	I0810 22:33:33.349073    4347 kubeadm.go:392] StartCluster complete in 34.771003448s
	I0810 22:33:33.349094    4347 settings.go:142] acquiring lock: {Name:mk9de8b97604ec8ec02e9734983b03b6308517c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:33:33.349231    4347 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0810 22:33:33.350223    4347 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig: {Name:mkb7fc7bcea695301999150daa705ac3e8a4c8a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:33:33.350677    4347 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0810 22:33:33.350929    4347 kapi.go:59] client config for multinode-20210810223223-30291: &rest.Config{Host:"https://192.168.50.32:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2660), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0810 22:33:33.351462    4347 cert_rotation.go:137] Starting client certificate rotation controller
	I0810 22:33:33.353044    4347 round_trippers.go:432] GET https://192.168.50.32:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0810 22:33:33.353062    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:33.353069    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:33.353075    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:33.371626    4347 round_trippers.go:457] Response Status: 200 OK in 18 milliseconds
	I0810 22:33:33.371650    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:33.371656    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:33.371661    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:33.371665    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:33.371670    4347 round_trippers.go:463]     Content-Length: 291
	I0810 22:33:33.371677    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:33 GMT
	I0810 22:33:33.371684    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:33.374939    4347 request.go:1123] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"8e58ff6b-1a8a-468a-822b-501079499c83","resourceVersion":"263","creationTimestamp":"2021-08-10T22:33:19Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0810 22:33:33.375876    4347 request.go:1123] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"8e58ff6b-1a8a-468a-822b-501079499c83","resourceVersion":"263","creationTimestamp":"2021-08-10T22:33:19Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0810 22:33:33.375961    4347 round_trippers.go:432] PUT https://192.168.50.32:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0810 22:33:33.375976    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:33.375985    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:33.375993    4347 round_trippers.go:442]     Content-Type: application/json
	I0810 22:33:33.375999    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:33.386678    4347 round_trippers.go:457] Response Status: 200 OK in 10 milliseconds
	I0810 22:33:33.386698    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:33.386704    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:33.386708    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:33.386711    4347 round_trippers.go:463]     Content-Length: 291
	I0810 22:33:33.386714    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:33 GMT
	I0810 22:33:33.386717    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:33.386720    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:33.387457    4347 request.go:1123] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"8e58ff6b-1a8a-468a-822b-501079499c83","resourceVersion":"400","creationTimestamp":"2021-08-10T22:33:19Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0810 22:33:33.887951    4347 round_trippers.go:432] GET https://192.168.50.32:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0810 22:33:33.887978    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:33.887984    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:33.887989    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:33.913547    4347 round_trippers.go:457] Response Status: 200 OK in 25 milliseconds
	I0810 22:33:33.913578    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:33.913586    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:33.913591    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:33.913595    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:33.913600    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:33.913604    4347 round_trippers.go:463]     Content-Length: 291
	I0810 22:33:33.913609    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:33 GMT
	I0810 22:33:33.921020    4347 request.go:1123] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"8e58ff6b-1a8a-468a-822b-501079499c83","resourceVersion":"413","creationTimestamp":"2021-08-10T22:33:19Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0810 22:33:33.921152    4347 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "multinode-20210810223223-30291" rescaled to 1
	I0810 22:33:33.921211    4347 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.50.32 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0810 22:33:33.922878    4347 out.go:177] * Verifying Kubernetes components...
	I0810 22:33:33.922947    4347 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0810 22:33:33.921263    4347 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0810 22:33:33.921286    4347 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0810 22:33:33.923064    4347 addons.go:59] Setting storage-provisioner=true in profile "multinode-20210810223223-30291"
	I0810 22:33:33.923087    4347 addons.go:135] Setting addon storage-provisioner=true in "multinode-20210810223223-30291"
	W0810 22:33:33.923094    4347 addons.go:147] addon storage-provisioner should already be in state true
	I0810 22:33:33.923123    4347 host.go:66] Checking if "multinode-20210810223223-30291" exists ...
	I0810 22:33:33.923065    4347 addons.go:59] Setting default-storageclass=true in profile "multinode-20210810223223-30291"
	I0810 22:33:33.923170    4347 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-20210810223223-30291"
	I0810 22:33:33.923559    4347 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:33:33.923603    4347 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:33:33.923615    4347 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:33:33.923657    4347 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:33:33.935187    4347 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:32915
	I0810 22:33:33.935713    4347 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:33:33.936343    4347 main.go:130] libmachine: Using API Version  1
	I0810 22:33:33.936389    4347 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:33:33.936785    4347 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:33:33.937293    4347 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:33:33.937343    4347 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:33:33.945252    4347 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35925
	I0810 22:33:33.945682    4347 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:33:33.946201    4347 main.go:130] libmachine: Using API Version  1
	I0810 22:33:33.946233    4347 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:33:33.946626    4347 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:33:33.946838    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetState
	I0810 22:33:33.948560    4347 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35591
	I0810 22:33:33.948970    4347 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:33:33.949430    4347 main.go:130] libmachine: Using API Version  1
	I0810 22:33:33.949456    4347 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:33:33.949782    4347 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:33:33.949963    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetState
	I0810 22:33:33.951183    4347 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0810 22:33:33.951464    4347 kapi.go:59] client config for multinode-20210810223223-30291: &rest.Config{Host:"https://192.168.50.32:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2660), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0810 22:33:33.953059    4347 round_trippers.go:432] GET https://192.168.50.32:8443/apis/storage.k8s.io/v1/storageclasses
	I0810 22:33:33.953077    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:33.953090    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:33.953097    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:33.953137    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .DriverName
	I0810 22:33:33.955126    4347 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0810 22:33:33.955233    4347 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0810 22:33:33.955248    4347 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0810 22:33:33.955271    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHHostname
	I0810 22:33:33.960477    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:33:33.960870    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:89", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:32:38 +0000 UTC Type:0 Mac:52:54:00:ce:d8:89 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:multinode-20210810223223-30291 Clientid:01:52:54:00:ce:d8:89}
	I0810 22:33:33.960904    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined IP address 192.168.50.32 and MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:33:33.960995    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHPort
	I0810 22:33:33.961169    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHKeyPath
	I0810 22:33:33.961314    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHUsername
	I0810 22:33:33.961476    4347 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291/id_rsa Username:docker}
	I0810 22:33:33.965291    4347 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0810 22:33:33.965523    4347 kapi.go:59] client config for multinode-20210810223223-30291: &rest.Config{Host:"https://192.168.50.32:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2660), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0810 22:33:33.965934    4347 round_trippers.go:457] Response Status: 200 OK in 12 milliseconds
	I0810 22:33:33.965949    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:33.965954    4347 round_trippers.go:463]     Content-Length: 109
	I0810 22:33:33.965959    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:33 GMT
	I0810 22:33:33.965963    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:33.965968    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:33.965972    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:33.965976    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:33.966008    4347 request.go:1123] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"427"},"items":[]}
	I0810 22:33:33.966591    4347 addons.go:135] Setting addon default-storageclass=true in "multinode-20210810223223-30291"
	W0810 22:33:33.966608    4347 addons.go:147] addon default-storageclass should already be in state true
	I0810 22:33:33.966635    4347 host.go:66] Checking if "multinode-20210810223223-30291" exists ...
	I0810 22:33:33.966794    4347 node_ready.go:35] waiting up to 6m0s for node "multinode-20210810223223-30291" to be "Ready" ...
	I0810 22:33:33.966867    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291
	I0810 22:33:33.966878    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:33.966885    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:33.966895    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:33.966961    4347 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:33:33.967011    4347 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:33:33.973657    4347 round_trippers.go:457] Response Status: 200 OK in 6 milliseconds
	I0810 22:33:33.973677    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:33.973684    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:33.973689    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:33.973693    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:33.973698    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:33 GMT
	I0810 22:33:33.973702    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:33.978021    4347 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42673
	I0810 22:33:33.978515    4347 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:33:33.979008    4347 main.go:130] libmachine: Using API Version  1
	I0810 22:33:33.979030    4347 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:33:33.979385    4347 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:33:33.979875    4347 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:33:33.979912    4347 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:33:33.980553    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","resourceVersion":"406","creationTimestamp":"2021-08-10T22:33:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223223-30291","minikube.k8s.io/updated_at":"2021_08_10T22_33_20_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22 [truncated 6553 chars]
	I0810 22:33:33.981915    4347 node_ready.go:49] node "multinode-20210810223223-30291" has status "Ready":"True"
	I0810 22:33:33.981932    4347 node_ready.go:38] duration metric: took 15.119864ms waiting for node "multinode-20210810223223-30291" to be "Ready" ...
	I0810 22:33:33.981944    4347 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0810 22:33:33.982029    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods
	I0810 22:33:33.982045    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:33.982052    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:33.982058    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:33.990499    4347 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43719
	I0810 22:33:33.990938    4347 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:33:33.991400    4347 main.go:130] libmachine: Using API Version  1
	I0810 22:33:33.991426    4347 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:33:33.991750    4347 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:33:33.991947    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetState
	I0810 22:33:33.994945    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .DriverName
	I0810 22:33:33.995199    4347 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0810 22:33:33.995219    4347 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0810 22:33:33.995240    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHHostname
	I0810 22:33:33.996016    4347 round_trippers.go:457] Response Status: 200 OK in 13 milliseconds
	I0810 22:33:33.996033    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:33.996039    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:33.996045    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:33 GMT
	I0810 22:33:33.996058    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:33.996065    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:33.996070    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:34.000696    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:33:34.001087    4347 request.go:1123] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"429"},"items":[{"metadata":{"name":"etcd-multinode-20210810223223-30291","namespace":"kube-system","uid":"8498143e-4386-44bc-9541-3193bd504c1d","resourceVersion":"297","creationTimestamp":"2021-08-10T22:33:25Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.50.32:2379","kubernetes.io/config.hash":"ee4e4232c1192224bf90edfa1030cde5","kubernetes.io/config.mirror":"ee4e4232c1192224bf90edfa1030cde5","kubernetes.io/config.seen":"2021-08-10T22:33:24.968043630Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:33:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.adv [truncated 40045 chars]
	I0810 22:33:34.001123    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:89", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:32:38 +0000 UTC Type:0 Mac:52:54:00:ce:d8:89 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:multinode-20210810223223-30291 Clientid:01:52:54:00:ce:d8:89}
	I0810 22:33:34.001150    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined IP address 192.168.50.32 and MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:33:34.001308    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHPort
	I0810 22:33:34.001469    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHKeyPath
	I0810 22:33:34.001608    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHUsername
	I0810 22:33:34.001753    4347 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291/id_rsa Username:docker}
	I0810 22:33:34.008864    4347 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-20210810223223-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:33:34.008964    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210810223223-30291
	I0810 22:33:34.008979    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:34.008987    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:34.008993    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:34.021173    4347 round_trippers.go:457] Response Status: 200 OK in 12 milliseconds
	I0810 22:33:34.021193    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:34.021199    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:34.021203    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:34.021208    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:34.021213    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:34.021218    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:34 GMT
	I0810 22:33:34.027109    4347 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210810223223-30291","namespace":"kube-system","uid":"8498143e-4386-44bc-9541-3193bd504c1d","resourceVersion":"297","creationTimestamp":"2021-08-10T22:33:25Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.50.32:2379","kubernetes.io/config.hash":"ee4e4232c1192224bf90edfa1030cde5","kubernetes.io/config.mirror":"ee4e4232c1192224bf90edfa1030cde5","kubernetes.io/config.seen":"2021-08-10T22:33:24.968043630Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:33:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 5793 chars]
	I0810 22:33:34.033222    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291
	I0810 22:33:34.033243    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:34.033251    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:34.033256    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:34.063731    4347 round_trippers.go:457] Response Status: 200 OK in 30 milliseconds
	I0810 22:33:34.063752    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:34.063758    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:34.063762    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:34.063766    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:34.063773    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:34.063777    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:34 GMT
	I0810 22:33:34.064156    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","resourceVersion":"406","creationTimestamp":"2021-08-10T22:33:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223223-30291","minikube.k8s.io/updated_at":"2021_08_10T22_33_20_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22 [truncated 6553 chars]
	I0810 22:33:34.368394    4347 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0810 22:33:34.441758    4347 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0810 22:33:34.466297    4347 command_runner.go:124] > apiVersion: v1
	I0810 22:33:34.466317    4347 command_runner.go:124] > data:
	I0810 22:33:34.466321    4347 command_runner.go:124] >   Corefile: |
	I0810 22:33:34.466325    4347 command_runner.go:124] >     .:53 {
	I0810 22:33:34.466330    4347 command_runner.go:124] >         errors
	I0810 22:33:34.466335    4347 command_runner.go:124] >         health {
	I0810 22:33:34.466339    4347 command_runner.go:124] >            lameduck 5s
	I0810 22:33:34.466343    4347 command_runner.go:124] >         }
	I0810 22:33:34.466346    4347 command_runner.go:124] >         ready
	I0810 22:33:34.466353    4347 command_runner.go:124] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0810 22:33:34.466357    4347 command_runner.go:124] >            pods insecure
	I0810 22:33:34.466362    4347 command_runner.go:124] >            fallthrough in-addr.arpa ip6.arpa
	I0810 22:33:34.466367    4347 command_runner.go:124] >            ttl 30
	I0810 22:33:34.466371    4347 command_runner.go:124] >         }
	I0810 22:33:34.466375    4347 command_runner.go:124] >         prometheus :9153
	I0810 22:33:34.466380    4347 command_runner.go:124] >         forward . /etc/resolv.conf {
	I0810 22:33:34.466388    4347 command_runner.go:124] >            max_concurrent 1000
	I0810 22:33:34.466391    4347 command_runner.go:124] >         }
	I0810 22:33:34.466395    4347 command_runner.go:124] >         cache 30
	I0810 22:33:34.466400    4347 command_runner.go:124] >         loop
	I0810 22:33:34.466403    4347 command_runner.go:124] >         reload
	I0810 22:33:34.466407    4347 command_runner.go:124] >         loadbalance
	I0810 22:33:34.466411    4347 command_runner.go:124] >     }
	I0810 22:33:34.466415    4347 command_runner.go:124] > kind: ConfigMap
	I0810 22:33:34.466422    4347 command_runner.go:124] > metadata:
	I0810 22:33:34.466428    4347 command_runner.go:124] >   creationTimestamp: "2021-08-10T22:33:19Z"
	I0810 22:33:34.466434    4347 command_runner.go:124] >   name: coredns
	I0810 22:33:34.466439    4347 command_runner.go:124] >   namespace: kube-system
	I0810 22:33:34.466444    4347 command_runner.go:124] >   resourceVersion: "255"
	I0810 22:33:34.466449    4347 command_runner.go:124] >   uid: 4c6f7d11-ffe0-48dd-ab28-31bb819ab94b
	I0810 22:33:34.490421    4347 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0810 22:33:34.565515    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210810223223-30291
	I0810 22:33:34.565543    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:34.565552    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:34.565556    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:34.569085    4347 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:33:34.569101    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:34.569106    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:34.569109    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:34.569112    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:34.569115    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:34.569118    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:34 GMT
	I0810 22:33:34.569785    4347 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210810223223-30291","namespace":"kube-system","uid":"8498143e-4386-44bc-9541-3193bd504c1d","resourceVersion":"297","creationTimestamp":"2021-08-10T22:33:25Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.50.32:2379","kubernetes.io/config.hash":"ee4e4232c1192224bf90edfa1030cde5","kubernetes.io/config.mirror":"ee4e4232c1192224bf90edfa1030cde5","kubernetes.io/config.seen":"2021-08-10T22:33:24.968043630Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:33:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 5793 chars]
	I0810 22:33:34.570092    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291
	I0810 22:33:34.570103    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:34.570108    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:34.570112    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:34.573028    4347 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:33:34.573043    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:34.573047    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:34.573051    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:34.573054    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:34.573057    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:34.573060    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:34 GMT
	I0810 22:33:34.574042    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","resourceVersion":"406","creationTimestamp":"2021-08-10T22:33:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223223-30291","minikube.k8s.io/updated_at":"2021_08_10T22_33_20_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22 [truncated 6553 chars]
	I0810 22:33:35.064683    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210810223223-30291
	I0810 22:33:35.064709    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:35.064715    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:35.064720    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:35.067614    4347 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:33:35.067631    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:35.067635    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:35 GMT
	I0810 22:33:35.067638    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:35.067641    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:35.067644    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:35.067647    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:35.068032    4347 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210810223223-30291","namespace":"kube-system","uid":"8498143e-4386-44bc-9541-3193bd504c1d","resourceVersion":"297","creationTimestamp":"2021-08-10T22:33:25Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.50.32:2379","kubernetes.io/config.hash":"ee4e4232c1192224bf90edfa1030cde5","kubernetes.io/config.mirror":"ee4e4232c1192224bf90edfa1030cde5","kubernetes.io/config.seen":"2021-08-10T22:33:24.968043630Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:33:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 5793 chars]
	I0810 22:33:35.068357    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291
	I0810 22:33:35.068370    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:35.068375    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:35.068380    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:35.070355    4347 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:33:35.070372    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:35.070377    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:35.070382    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:35 GMT
	I0810 22:33:35.070387    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:35.070391    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:35.070396    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:35.070774    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","resourceVersion":"406","creationTimestamp":"2021-08-10T22:33:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223223-30291","minikube.k8s.io/updated_at":"2021_08_10T22_33_20_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22 [truncated 6553 chars]
	I0810 22:33:35.565447    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210810223223-30291
	I0810 22:33:35.565470    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:35.565476    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:35.565480    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:35.568430    4347 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:33:35.568455    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:35.568462    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:35.568467    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:35.568471    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:35.568476    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:35.568480    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:35 GMT
	I0810 22:33:35.569359    4347 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210810223223-30291","namespace":"kube-system","uid":"8498143e-4386-44bc-9541-3193bd504c1d","resourceVersion":"297","creationTimestamp":"2021-08-10T22:33:25Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.50.32:2379","kubernetes.io/config.hash":"ee4e4232c1192224bf90edfa1030cde5","kubernetes.io/config.mirror":"ee4e4232c1192224bf90edfa1030cde5","kubernetes.io/config.seen":"2021-08-10T22:33:24.968043630Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:33:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 5793 chars]
	I0810 22:33:35.569799    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291
	I0810 22:33:35.569818    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:35.569825    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:35.569835    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:35.571756    4347 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:33:35.571771    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:35.571775    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:35.571779    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:35.571782    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:35.571790    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:35.571800    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:35 GMT
	I0810 22:33:35.572191    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","resourceVersion":"406","creationTimestamp":"2021-08-10T22:33:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223223-30291","minikube.k8s.io/updated_at":"2021_08_10T22_33_20_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22 [truncated 6553 chars]
	I0810 22:33:36.049911    4347 command_runner.go:124] > serviceaccount/storage-provisioner created
	I0810 22:33:36.065181    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210810223223-30291
	I0810 22:33:36.065209    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:36.065217    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:36.065223    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:36.067823    4347 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:33:36.067848    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:36.067856    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:36.067862    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:36.067868    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:36 GMT
	I0810 22:33:36.067873    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:36.067879    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:36.068359    4347 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210810223223-30291","namespace":"kube-system","uid":"8498143e-4386-44bc-9541-3193bd504c1d","resourceVersion":"297","creationTimestamp":"2021-08-10T22:33:25Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.50.32:2379","kubernetes.io/config.hash":"ee4e4232c1192224bf90edfa1030cde5","kubernetes.io/config.mirror":"ee4e4232c1192224bf90edfa1030cde5","kubernetes.io/config.seen":"2021-08-10T22:33:24.968043630Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:33:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 5793 chars]
	I0810 22:33:36.068786    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291
	I0810 22:33:36.068810    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:36.068817    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:36.068825    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:36.072138    4347 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:33:36.072159    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:36.072166    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:36.072172    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:36.072177    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:36.072206    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:36.072212    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:36 GMT
	I0810 22:33:36.072864    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","resourceVersion":"406","creationTimestamp":"2021-08-10T22:33:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223223-30291","minikube.k8s.io/updated_at":"2021_08_10T22_33_20_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22 [truncated 6553 chars]
	I0810 22:33:36.073159    4347 pod_ready.go:102] pod "etcd-multinode-20210810223223-30291" in "kube-system" namespace has status "Ready":"False"
	I0810 22:33:36.081380    4347 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0810 22:33:36.095147    4347 command_runner.go:124] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0810 22:33:36.126134    4347 command_runner.go:124] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0810 22:33:36.169525    4347 command_runner.go:124] > endpoints/k8s.io-minikube-hostpath created
	I0810 22:33:36.196944    4347 command_runner.go:124] > pod/storage-provisioner created
	I0810 22:33:36.201365    4347 command_runner.go:124] > storageclass.storage.k8s.io/standard created
	I0810 22:33:36.201402    4347 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.7596142s)
	I0810 22:33:36.201437    4347 main.go:130] libmachine: Making call to close driver server
	I0810 22:33:36.201458    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .Close
	I0810 22:33:36.201485    4347 command_runner.go:124] > configmap/coredns replaced
	I0810 22:33:36.201524    4347 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.711063544s)
	I0810 22:33:36.201552    4347 start.go:736] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS
	I0810 22:33:36.201774    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | Closing plugin on server side
	I0810 22:33:36.201787    4347 main.go:130] libmachine: Successfully made call to close driver server
	I0810 22:33:36.201805    4347 main.go:130] libmachine: Making call to close connection to plugin binary
	I0810 22:33:36.201815    4347 main.go:130] libmachine: Making call to close driver server
	I0810 22:33:36.201824    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .Close
	I0810 22:33:36.201891    4347 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.833456644s)
	I0810 22:33:36.201918    4347 main.go:130] libmachine: Making call to close driver server
	I0810 22:33:36.201928    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .Close
	I0810 22:33:36.202049    4347 main.go:130] libmachine: Successfully made call to close driver server
	I0810 22:33:36.202065    4347 main.go:130] libmachine: Making call to close connection to plugin binary
	I0810 22:33:36.202078    4347 main.go:130] libmachine: Making call to close driver server
	I0810 22:33:36.202088    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .Close
	I0810 22:33:36.202241    4347 main.go:130] libmachine: Successfully made call to close driver server
	I0810 22:33:36.202286    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | Closing plugin on server side
	I0810 22:33:36.202288    4347 main.go:130] libmachine: Making call to close connection to plugin binary
	I0810 22:33:36.202338    4347 main.go:130] libmachine: Making call to close driver server
	I0810 22:33:36.202349    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .Close
	I0810 22:33:36.202430    4347 main.go:130] libmachine: Successfully made call to close driver server
	I0810 22:33:36.202478    4347 main.go:130] libmachine: Making call to close connection to plugin binary
	I0810 22:33:36.202436    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | Closing plugin on server side
	I0810 22:33:36.202606    4347 main.go:130] libmachine: Successfully made call to close driver server
	I0810 22:33:36.202652    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | Closing plugin on server side
	I0810 22:33:36.202656    4347 main.go:130] libmachine: Making call to close connection to plugin binary
	I0810 22:33:36.205016    4347 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0810 22:33:36.205046    4347 addons.go:344] enableAddons completed in 2.283766308s
	I0810 22:33:36.565144    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210810223223-30291
	I0810 22:33:36.565181    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:36.565188    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:36.565194    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:36.568817    4347 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:33:36.568842    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:36.568849    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:36 GMT
	I0810 22:33:36.568854    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:36.568858    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:36.568863    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:36.568867    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:36.569488    4347 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210810223223-30291","namespace":"kube-system","uid":"8498143e-4386-44bc-9541-3193bd504c1d","resourceVersion":"297","creationTimestamp":"2021-08-10T22:33:25Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.50.32:2379","kubernetes.io/config.hash":"ee4e4232c1192224bf90edfa1030cde5","kubernetes.io/config.mirror":"ee4e4232c1192224bf90edfa1030cde5","kubernetes.io/config.seen":"2021-08-10T22:33:24.968043630Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:33:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 5793 chars]
	I0810 22:33:36.569821    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291
	I0810 22:33:36.569835    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:36.569842    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:36.569848    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:36.574427    4347 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0810 22:33:36.574450    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:36.574456    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:36.574461    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:36.574466    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:36.574470    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:36.574475    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:36 GMT
	I0810 22:33:36.575222    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","resourceVersion":"406","creationTimestamp":"2021-08-10T22:33:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223223-30291","minikube.k8s.io/updated_at":"2021_08_10T22_33_20_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22 [truncated 6553 chars]
	I0810 22:33:37.064904    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210810223223-30291
	I0810 22:33:37.064934    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:37.064941    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:37.064946    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:37.069230    4347 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0810 22:33:37.069248    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:37.069259    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:37.069265    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:37.069270    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:37.069274    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:37.069278    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:37 GMT
	I0810 22:33:37.069958    4347 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210810223223-30291","namespace":"kube-system","uid":"8498143e-4386-44bc-9541-3193bd504c1d","resourceVersion":"297","creationTimestamp":"2021-08-10T22:33:25Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.50.32:2379","kubernetes.io/config.hash":"ee4e4232c1192224bf90edfa1030cde5","kubernetes.io/config.mirror":"ee4e4232c1192224bf90edfa1030cde5","kubernetes.io/config.seen":"2021-08-10T22:33:24.968043630Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:33:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 5793 chars]
	I0810 22:33:37.070283    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291
	I0810 22:33:37.070296    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:37.070302    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:37.070306    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:37.073821    4347 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:33:37.073837    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:37.073843    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:37.073848    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:37.073852    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:37.073856    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:37.073860    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:37 GMT
	I0810 22:33:37.074059    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","resourceVersion":"406","creationTimestamp":"2021-08-10T22:33:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223223-30291","minikube.k8s.io/updated_at":"2021_08_10T22_33_20_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22 [truncated 6553 chars]
	I0810 22:33:37.564822    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210810223223-30291
	I0810 22:33:37.564865    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:37.564878    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:37.564886    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:37.568039    4347 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:33:37.568056    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:37.568064    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:37.568069    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:37 GMT
	I0810 22:33:37.568074    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:37.568079    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:37.568083    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:37.568598    4347 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210810223223-30291","namespace":"kube-system","uid":"8498143e-4386-44bc-9541-3193bd504c1d","resourceVersion":"297","creationTimestamp":"2021-08-10T22:33:25Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.50.32:2379","kubernetes.io/config.hash":"ee4e4232c1192224bf90edfa1030cde5","kubernetes.io/config.mirror":"ee4e4232c1192224bf90edfa1030cde5","kubernetes.io/config.seen":"2021-08-10T22:33:24.968043630Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:33:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 5793 chars]
	I0810 22:33:37.568987    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291
	I0810 22:33:37.569005    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:37.569011    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:37.569014    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:37.572787    4347 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:33:37.572799    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:37.572802    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:37.572806    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:37.572809    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:37.572812    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:37.572816    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:37 GMT
	I0810 22:33:37.573277    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","resourceVersion":"406","creationTimestamp":"2021-08-10T22:33:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223223-30291","minikube.k8s.io/updated_at":"2021_08_10T22_33_20_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22 [truncated 6553 chars]
	I0810 22:33:38.064889    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210810223223-30291
	I0810 22:33:38.064914    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:38.064921    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:38.064925    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:38.071370    4347 round_trippers.go:457] Response Status: 200 OK in 6 milliseconds
	I0810 22:33:38.071391    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:38.071397    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:38 GMT
	I0810 22:33:38.071402    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:38.071407    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:38.071411    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:38.071416    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:38.071608    4347 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210810223223-30291","namespace":"kube-system","uid":"8498143e-4386-44bc-9541-3193bd504c1d","resourceVersion":"481","creationTimestamp":"2021-08-10T22:33:25Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.50.32:2379","kubernetes.io/config.hash":"ee4e4232c1192224bf90edfa1030cde5","kubernetes.io/config.mirror":"ee4e4232c1192224bf90edfa1030cde5","kubernetes.io/config.seen":"2021-08-10T22:33:24.968043630Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:33:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 5792 chars]
	I0810 22:33:38.071922    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291
	I0810 22:33:38.071936    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:38.071941    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:38.071945    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:38.080215    4347 round_trippers.go:457] Response Status: 200 OK in 8 milliseconds
	I0810 22:33:38.080236    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:38.080242    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:38 GMT
	I0810 22:33:38.080245    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:38.080248    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:38.080251    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:38.080254    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:38.081469    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","resourceVersion":"406","creationTimestamp":"2021-08-10T22:33:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223223-30291","minikube.k8s.io/updated_at":"2021_08_10T22_33_20_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22 [truncated 6553 chars]
	I0810 22:33:38.081711    4347 pod_ready.go:102] pod "etcd-multinode-20210810223223-30291" in "kube-system" namespace has status "Ready":"False"
	I0810 22:33:38.565120    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210810223223-30291
	I0810 22:33:38.565144    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:38.565150    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:38.565157    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:38.568736    4347 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:33:38.568759    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:38.568765    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:38.568768    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:38 GMT
	I0810 22:33:38.568771    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:38.568775    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:38.568778    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:38.568867    4347 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210810223223-30291","namespace":"kube-system","uid":"8498143e-4386-44bc-9541-3193bd504c1d","resourceVersion":"489","creationTimestamp":"2021-08-10T22:33:25Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.50.32:2379","kubernetes.io/config.hash":"ee4e4232c1192224bf90edfa1030cde5","kubernetes.io/config.mirror":"ee4e4232c1192224bf90edfa1030cde5","kubernetes.io/config.seen":"2021-08-10T22:33:24.968043630Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:33:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 5569 chars]
	I0810 22:33:38.569211    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291
	I0810 22:33:38.569227    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:38.569232    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:38.569236    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:38.571624    4347 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:33:38.571642    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:38.571647    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:38.571652    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:38.571657    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:38.571661    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:38.571666    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:38 GMT
	I0810 22:33:38.571821    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","resourceVersion":"406","creationTimestamp":"2021-08-10T22:33:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223223-30291","minikube.k8s.io/updated_at":"2021_08_10T22_33_20_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22 [truncated 6553 chars]
	I0810 22:33:38.572071    4347 pod_ready.go:92] pod "etcd-multinode-20210810223223-30291" in "kube-system" namespace has status "Ready":"True"
	I0810 22:33:38.572090    4347 pod_ready.go:81] duration metric: took 4.563191922s waiting for pod "etcd-multinode-20210810223223-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:33:38.572106    4347 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-20210810223223-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:33:38.572193    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20210810223223-30291
	I0810 22:33:38.572205    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:38.572211    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:38.572215    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:38.573981    4347 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:33:38.573997    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:38.574003    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:38.574008    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:38.574012    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:38.574016    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:38.574020    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:38 GMT
	I0810 22:33:38.574199    4347 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20210810223223-30291","namespace":"kube-system","uid":"1c83d52d-8a08-42be-9c8a-6420a1bdb75c","resourceVersion":"317","creationTimestamp":"2021-08-10T22:33:17Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.50.32:8443","kubernetes.io/config.hash":"9099813bef5425d688516ac434247f4d","kubernetes.io/config.mirror":"9099813bef5425d688516ac434247f4d","kubernetes.io/config.seen":"2021-08-10T22:33:07.454085484Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:33:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address [truncated 7249 chars]
	I0810 22:33:38.574503    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291
	I0810 22:33:38.574515    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:38.574520    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:38.574524    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:38.576855    4347 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:33:38.576871    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:38.576877    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:38.576882    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:38.576886    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:38.576891    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:38.576895    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:38 GMT
	I0810 22:33:38.577455    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","resourceVersion":"406","creationTimestamp":"2021-08-10T22:33:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223223-30291","minikube.k8s.io/updated_at":"2021_08_10T22_33_20_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22 [truncated 6553 chars]
	I0810 22:33:38.577692    4347 pod_ready.go:92] pod "kube-apiserver-multinode-20210810223223-30291" in "kube-system" namespace has status "Ready":"True"
	I0810 22:33:38.577708    4347 pod_ready.go:81] duration metric: took 5.570205ms waiting for pod "kube-apiserver-multinode-20210810223223-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:33:38.577721    4347 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-20210810223223-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:33:38.577772    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210810223223-30291
	I0810 22:33:38.577783    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:38.577790    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:38.577795    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:38.579574    4347 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:33:38.579590    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:38.579596    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:38 GMT
	I0810 22:33:38.579600    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:38.579605    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:38.579609    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:38.579614    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:38.579888    4347 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210810223223-30291","namespace":"kube-system","uid":"9305e895-2f70-44a4-8319-6f50b7e7a0ce","resourceVersion":"456","creationTimestamp":"2021-08-10T22:33:25Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"77761625d867cf54e5130d9def04b55c","kubernetes.io/config.mirror":"77761625d867cf54e5130d9def04b55c","kubernetes.io/config.seen":"2021-08-10T22:33:24.968061293Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:33:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi
g.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.s [truncated 6810 chars]
	I0810 22:33:38.580216    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291
	I0810 22:33:38.580229    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:38.580235    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:38.580239    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:38.581845    4347 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:33:38.581859    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:38.581865    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:38.581870    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:38.581875    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:38.581880    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:38.581884    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:38 GMT
	I0810 22:33:38.582065    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","resourceVersion":"406","creationTimestamp":"2021-08-10T22:33:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223223-30291","minikube.k8s.io/updated_at":"2021_08_10T22_33_20_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22 [truncated 6553 chars]
	I0810 22:33:38.582374    4347 pod_ready.go:92] pod "kube-controller-manager-multinode-20210810223223-30291" in "kube-system" namespace has status "Ready":"True"
	I0810 22:33:38.582395    4347 pod_ready.go:81] duration metric: took 4.663344ms waiting for pod "kube-controller-manager-multinode-20210810223223-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:33:38.582408    4347 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lmhw9" in "kube-system" namespace to be "Ready" ...
	I0810 22:33:38.582473    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lmhw9
	I0810 22:33:38.582484    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:38.582490    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:38.582498    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:38.584255    4347 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:33:38.584271    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:38.584275    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:38.584279    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:38.584282    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:38.584284    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:38.584287    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:38 GMT
	I0810 22:33:38.584565    4347 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lmhw9","generateName":"kube-proxy-","namespace":"kube-system","uid":"2a10306d-93c9-4aac-b47a-8bd1d406882c","resourceVersion":"470","creationTimestamp":"2021-08-10T22:33:33Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"3eb0224f-214a-4d5e-ba63-b7b722448d21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:33:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3eb0224f-214a-4d5e-ba63-b7b722448d21\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller
":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5758 chars]
	I0810 22:33:38.584865    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291
	I0810 22:33:38.584880    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:38.584887    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:38.584893    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:38.587199    4347 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:33:38.587215    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:38.587221    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:38.587226    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:38.587230    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:38.587234    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:38.587239    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:38 GMT
	I0810 22:33:38.587537    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","resourceVersion":"406","creationTimestamp":"2021-08-10T22:33:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223223-30291","minikube.k8s.io/updated_at":"2021_08_10T22_33_20_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22 [truncated 6553 chars]
	I0810 22:33:38.587818    4347 pod_ready.go:92] pod "kube-proxy-lmhw9" in "kube-system" namespace has status "Ready":"True"
	I0810 22:33:38.587832    4347 pod_ready.go:81] duration metric: took 5.405358ms waiting for pod "kube-proxy-lmhw9" in "kube-system" namespace to be "Ready" ...
	I0810 22:33:38.587843    4347 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-20210810223223-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:33:38.587898    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210810223223-30291
	I0810 22:33:38.587908    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:38.587913    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:38.587917    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:38.589724    4347 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:33:38.589763    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:38.589769    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:38 GMT
	I0810 22:33:38.589774    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:38.589779    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:38.589783    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:38.589788    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:38.589907    4347 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20210810223223-30291","namespace":"kube-system","uid":"5a7e6aa0-3e54-4877-a2b0-79df1e84d9f7","resourceVersion":"295","creationTimestamp":"2021-08-10T22:33:25Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"18b06801ebf2048d768b73e098da8a40","kubernetes.io/config.mirror":"18b06801ebf2048d768b73e098da8a40","kubernetes.io/config.seen":"2021-08-10T22:33:24.968063579Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:33:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:ku
bernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labe [truncated 4540 chars]
	I0810 22:33:38.590230    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291
	I0810 22:33:38.590250    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:38.590256    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:38.590262    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:38.596540    4347 round_trippers.go:457] Response Status: 200 OK in 6 milliseconds
	I0810 22:33:38.596556    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:38.596562    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:38 GMT
	I0810 22:33:38.596567    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:38.596571    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:38.596575    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:38.596579    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:38.596674    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","resourceVersion":"406","creationTimestamp":"2021-08-10T22:33:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223223-30291","minikube.k8s.io/updated_at":"2021_08_10T22_33_20_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22 [truncated 6553 chars]
	I0810 22:33:38.596896    4347 pod_ready.go:92] pod "kube-scheduler-multinode-20210810223223-30291" in "kube-system" namespace has status "Ready":"True"
	I0810 22:33:38.596908    4347 pod_ready.go:81] duration metric: took 9.05652ms waiting for pod "kube-scheduler-multinode-20210810223223-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:33:38.596918    4347 pod_ready.go:38] duration metric: took 4.61496147s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0810 22:33:38.596944    4347 api_server.go:50] waiting for apiserver process to appear ...
	I0810 22:33:38.596999    4347 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0810 22:33:38.613458    4347 command_runner.go:124] > 2634
	I0810 22:33:38.614230    4347 api_server.go:70] duration metric: took 4.692982146s to wait for apiserver process to appear ...
	I0810 22:33:38.614252    4347 api_server.go:86] waiting for apiserver healthz status ...
	I0810 22:33:38.614264    4347 api_server.go:239] Checking apiserver healthz at https://192.168.50.32:8443/healthz ...
	I0810 22:33:38.620248    4347 api_server.go:265] https://192.168.50.32:8443/healthz returned 200:
	ok
	I0810 22:33:38.620382    4347 round_trippers.go:432] GET https://192.168.50.32:8443/version?timeout=32s
	I0810 22:33:38.620394    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:38.620401    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:38.620406    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:38.621322    4347 round_trippers.go:457] Response Status: 200 OK in 0 milliseconds
	I0810 22:33:38.621338    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:38.621344    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:38 GMT
	I0810 22:33:38.621348    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:38.621353    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:38.621357    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:38.621361    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:38.621366    4347 round_trippers.go:463]     Content-Length: 263
	I0810 22:33:38.621486    4347 request.go:1123] Response Body: {
	  "major": "1",
	  "minor": "21",
	  "gitVersion": "v1.21.3",
	  "gitCommit": "ca643a4d1f7bfe34773c74f79527be4afd95bf39",
	  "gitTreeState": "clean",
	  "buildDate": "2021-07-15T20:59:07Z",
	  "goVersion": "go1.16.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0810 22:33:38.621593    4347 api_server.go:139] control plane version: v1.21.3
	I0810 22:33:38.621612    4347 api_server.go:129] duration metric: took 7.353449ms to wait for apiserver health ...
	I0810 22:33:38.621621    4347 system_pods.go:43] waiting for kube-system pods to appear ...
	I0810 22:33:38.765161    4347 request.go:600] Waited for 143.458968ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods
	I0810 22:33:38.765219    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods
	I0810 22:33:38.765225    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:38.765230    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:38.765235    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:38.773379    4347 round_trippers.go:457] Response Status: 200 OK in 8 milliseconds
	I0810 22:33:38.773421    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:38.773428    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:38 GMT
	I0810 22:33:38.773433    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:38.773436    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:38.773439    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:38.773442    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:38.776769    4347 request.go:1123] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"492"},"items":[{"metadata":{"name":"coredns-558bd4d5db-v7x6p","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"0c4eb44b-9d97-4934-aa16-8b8625bf04cf","resourceVersion":"490","creationTimestamp":"2021-08-10T22:33:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"28236a2d-7d69-4771-a778-5fae1cd7d05f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:33:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"28236a2d-7d69-4771-a778-5fae1cd7d05f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller"
:{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k: [truncated 53135 chars]
	I0810 22:33:38.777944    4347 system_pods.go:59] 8 kube-system pods found
	I0810 22:33:38.777979    4347 system_pods.go:61] "coredns-558bd4d5db-v7x6p" [0c4eb44b-9d97-4934-aa16-8b8625bf04cf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0810 22:33:38.777995    4347 system_pods.go:61] "etcd-multinode-20210810223223-30291" [8498143e-4386-44bc-9541-3193bd504c1d] Running
	I0810 22:33:38.778003    4347 system_pods.go:61] "kindnet-2bvdc" [c26b9021-1d86-475c-ac98-6f7e7e07c434] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0810 22:33:38.778010    4347 system_pods.go:61] "kube-apiserver-multinode-20210810223223-30291" [1c83d52d-8a08-42be-9c8a-6420a1bdb75c] Running
	I0810 22:33:38.778014    4347 system_pods.go:61] "kube-controller-manager-multinode-20210810223223-30291" [9305e895-2f70-44a4-8319-6f50b7e7a0ce] Running
	I0810 22:33:38.778018    4347 system_pods.go:61] "kube-proxy-lmhw9" [2a10306d-93c9-4aac-b47a-8bd1d406882c] Running
	I0810 22:33:38.778022    4347 system_pods.go:61] "kube-scheduler-multinode-20210810223223-30291" [5a7e6aa0-3e54-4877-a2b0-79df1e84d9f7] Running
	I0810 22:33:38.778029    4347 system_pods.go:61] "storage-provisioner" [af946d1d-fa19-47fa-8c83-fd1d06a0e788] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0810 22:33:38.778034    4347 system_pods.go:74] duration metric: took 156.4074ms to wait for pod list to return data ...
	I0810 22:33:38.778043    4347 default_sa.go:34] waiting for default service account to be created ...
	I0810 22:33:38.965387    4347 request.go:600] Waited for 187.27479ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.50.32:8443/api/v1/namespaces/default/serviceaccounts
	I0810 22:33:38.965461    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/namespaces/default/serviceaccounts
	I0810 22:33:38.965467    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:38.965472    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:38.965476    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:38.968510    4347 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:33:38.968533    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:38.968546    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:38.968550    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:38.968553    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:38.968557    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:38.968560    4347 round_trippers.go:463]     Content-Length: 304
	I0810 22:33:38.968566    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:38 GMT
	I0810 22:33:38.968584    4347 request.go:1123] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"492"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"9c37cf8a-c605-4344-9101-164c34e1b236","resourceVersion":"394","creationTimestamp":"2021-08-10T22:33:33Z"},"secrets":[{"name":"default-token-pfsbc"}]}]}
	I0810 22:33:38.969287    4347 default_sa.go:45] found service account: "default"
	I0810 22:33:38.969308    4347 default_sa.go:55] duration metric: took 191.259057ms for default service account to be created ...
	I0810 22:33:38.969315    4347 system_pods.go:116] waiting for k8s-apps to be running ...
	I0810 22:33:39.165137    4347 request.go:600] Waited for 195.745514ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods
	I0810 22:33:39.165213    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods
	I0810 22:33:39.165219    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:39.165224    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:39.165228    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:39.168975    4347 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:33:39.168994    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:39.169000    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:39 GMT
	I0810 22:33:39.169004    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:39.169007    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:39.169010    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:39.169013    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:39.170845    4347 request.go:1123] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"497"},"items":[{"metadata":{"name":"coredns-558bd4d5db-v7x6p","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"0c4eb44b-9d97-4934-aa16-8b8625bf04cf","resourceVersion":"493","creationTimestamp":"2021-08-10T22:33:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"28236a2d-7d69-4771-a778-5fae1cd7d05f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:33:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"28236a2d-7d69-4771-a778-5fae1cd7d05f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller"
:{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k: [truncated 52906 chars]
	I0810 22:33:39.172108    4347 system_pods.go:86] 8 kube-system pods found
	I0810 22:33:39.172151    4347 system_pods.go:89] "coredns-558bd4d5db-v7x6p" [0c4eb44b-9d97-4934-aa16-8b8625bf04cf] Running
	I0810 22:33:39.172160    4347 system_pods.go:89] "etcd-multinode-20210810223223-30291" [8498143e-4386-44bc-9541-3193bd504c1d] Running
	I0810 22:33:39.172168    4347 system_pods.go:89] "kindnet-2bvdc" [c26b9021-1d86-475c-ac98-6f7e7e07c434] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0810 22:33:39.172180    4347 system_pods.go:89] "kube-apiserver-multinode-20210810223223-30291" [1c83d52d-8a08-42be-9c8a-6420a1bdb75c] Running
	I0810 22:33:39.172185    4347 system_pods.go:89] "kube-controller-manager-multinode-20210810223223-30291" [9305e895-2f70-44a4-8319-6f50b7e7a0ce] Running
	I0810 22:33:39.172188    4347 system_pods.go:89] "kube-proxy-lmhw9" [2a10306d-93c9-4aac-b47a-8bd1d406882c] Running
	I0810 22:33:39.172192    4347 system_pods.go:89] "kube-scheduler-multinode-20210810223223-30291" [5a7e6aa0-3e54-4877-a2b0-79df1e84d9f7] Running
	I0810 22:33:39.172201    4347 system_pods.go:89] "storage-provisioner" [af946d1d-fa19-47fa-8c83-fd1d06a0e788] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0810 22:33:39.172227    4347 system_pods.go:126] duration metric: took 202.907364ms to wait for k8s-apps to be running ...
	I0810 22:33:39.172234    4347 system_svc.go:44] waiting for kubelet service to be running ....
	I0810 22:33:39.172279    4347 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0810 22:33:39.184422    4347 system_svc.go:56] duration metric: took 12.179239ms WaitForService to wait for kubelet.
	I0810 22:33:39.184441    4347 kubeadm.go:547] duration metric: took 5.263197859s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0810 22:33:39.184465    4347 node_conditions.go:102] verifying NodePressure condition ...
	I0810 22:33:39.365877    4347 request.go:600] Waited for 181.331919ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.50.32:8443/api/v1/nodes
	I0810 22:33:39.365947    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes
	I0810 22:33:39.365955    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:39.365962    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:39.365977    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:39.369263    4347 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:33:39.369285    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:39.369291    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:39.369294    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:39.369297    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:39.369300    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:39.369303    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:39 GMT
	I0810 22:33:39.369780    4347 request.go:1123] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"497"},"items":[{"metadata":{"name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","resourceVersion":"406","creationTimestamp":"2021-08-10T22:33:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223223-30291","minikube.k8s.io/updated_at":"2021_08_10T22_33_20_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed
-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operatio [truncated 6606 chars]
	I0810 22:33:39.370809    4347 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0810 22:33:39.370836    4347 node_conditions.go:123] node cpu capacity is 2
	I0810 22:33:39.370896    4347 node_conditions.go:105] duration metric: took 186.425566ms to run NodePressure ...
	I0810 22:33:39.370910    4347 start.go:231] waiting for startup goroutines ...
	I0810 22:33:39.373183    4347 out.go:177] 
	I0810 22:33:39.373458    4347 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/config.json ...
	I0810 22:33:39.375290    4347 out.go:177] * Starting node multinode-20210810223223-30291-m02 in cluster multinode-20210810223223-30291
	I0810 22:33:39.375311    4347 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0810 22:33:39.375324    4347 cache.go:56] Caching tarball of preloaded images
	I0810 22:33:39.375468    4347 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0810 22:33:39.375488    4347 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on crio
	I0810 22:33:39.375558    4347 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/config.json ...
	I0810 22:33:39.375692    4347 cache.go:205] Successfully downloaded all kic artifacts
	I0810 22:33:39.375716    4347 start.go:313] acquiring machines lock for multinode-20210810223223-30291-m02: {Name:mk9647f7c84b24381af0d3e731fd883065efc3b8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0810 22:33:39.375768    4347 start.go:317] acquired machines lock for "multinode-20210810223223-30291-m02" in 38.125µs
	I0810 22:33:39.375787    4347 start.go:89] Provisioning new machine with config: &{Name:multinode-20210810223223-30291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12122/minikube-v1.22.0-1628238775-12122.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 C
lusterName:multinode-20210810223223-30291 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.32 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0} &{Name:m02 IP: Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Wo
rker:true}
	I0810 22:33:39.375843    4347 start.go:126] createHost starting for "m02" (driver="kvm2")
	I0810 22:33:39.377535    4347 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0810 22:33:39.377656    4347 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:33:39.377692    4347 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:33:39.389071    4347 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38337
	I0810 22:33:39.389528    4347 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:33:39.390013    4347 main.go:130] libmachine: Using API Version  1
	I0810 22:33:39.390036    4347 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:33:39.390344    4347 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:33:39.390529    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetMachineName
	I0810 22:33:39.390661    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .DriverName
	I0810 22:33:39.390822    4347 start.go:160] libmachine.API.Create for "multinode-20210810223223-30291" (driver="kvm2")
	I0810 22:33:39.390850    4347 client.go:168] LocalClient.Create starting
	I0810 22:33:39.390876    4347 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem
	I0810 22:33:39.390902    4347 main.go:130] libmachine: Decoding PEM data...
	I0810 22:33:39.390921    4347 main.go:130] libmachine: Parsing certificate...
	I0810 22:33:39.391039    4347 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem
	I0810 22:33:39.391056    4347 main.go:130] libmachine: Decoding PEM data...
	I0810 22:33:39.391067    4347 main.go:130] libmachine: Parsing certificate...
	I0810 22:33:39.391123    4347 main.go:130] libmachine: Running pre-create checks...
	I0810 22:33:39.391136    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .PreCreateCheck
	I0810 22:33:39.391310    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetConfigRaw
	I0810 22:33:39.391764    4347 main.go:130] libmachine: Creating machine...
	I0810 22:33:39.391779    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .Create
	I0810 22:33:39.391915    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Creating KVM machine...
	I0810 22:33:39.394529    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | found existing default KVM network
	I0810 22:33:39.394714    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | found existing private KVM network mk-multinode-20210810223223-30291
	I0810 22:33:39.394802    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Setting up store path in /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291-m02 ...
	I0810 22:33:39.394828    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Building disk image from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/iso/minikube-v1.22.0-1628238775-12122.iso
	I0810 22:33:39.394910    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | I0810 22:33:39.394787    4625 common.go:108] Making disk image using store path: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	I0810 22:33:39.394996    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Downloading /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/iso/minikube-v1.22.0-1628238775-12122.iso...
	I0810 22:33:39.591851    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | I0810 22:33:39.591730    4625 common.go:115] Creating ssh key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291-m02/id_rsa...
	I0810 22:33:39.872490    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | I0810 22:33:39.872342    4625 common.go:121] Creating raw disk image: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291-m02/multinode-20210810223223-30291-m02.rawdisk...
	I0810 22:33:39.872536    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | Writing magic tar header
	I0810 22:33:39.872605    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | Writing SSH key tar header
	I0810 22:33:39.872662    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | I0810 22:33:39.872483    4625 common.go:135] Fixing permissions on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291-m02 ...
	I0810 22:33:39.872729    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291-m02
	I0810 22:33:39.872765    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291-m02 (perms=drwx------)
	I0810 22:33:39.872792    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines (perms=drwxr-xr-x)
	I0810 22:33:39.872819    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines
	I0810 22:33:39.872840    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube (perms=drwxr-xr-x)
	I0810 22:33:39.872862    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0 (perms=drwxr-xr-x)
	I0810 22:33:39.872879    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxr-xr-x)
	I0810 22:33:39.872894    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0810 22:33:39.872910    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	I0810 22:33:39.872930    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0
	I0810 22:33:39.872956    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0810 22:33:39.872969    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Creating domain...
	I0810 22:33:39.872990    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | Checking permissions on dir: /home/jenkins
	I0810 22:33:39.873011    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | Checking permissions on dir: /home
	I0810 22:33:39.873029    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | Skipping /home - not owner
	I0810 22:33:39.897657    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:fb:77:5c in network default
	I0810 22:33:39.898150    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Ensuring networks are active...
	I0810 22:33:39.898180    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:39.900225    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Ensuring network default is active
	I0810 22:33:39.900536    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Ensuring network mk-multinode-20210810223223-30291 is active
	I0810 22:33:39.900871    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Getting domain xml...
	I0810 22:33:39.902635    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Creating domain...
	I0810 22:33:40.317605    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Waiting to get IP...
	I0810 22:33:40.318388    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:40.318898    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | unable to find current IP address of domain multinode-20210810223223-30291-m02 in network mk-multinode-20210810223223-30291
	I0810 22:33:40.318927    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | I0810 22:33:40.318855    4625 retry.go:31] will retry after 263.082536ms: waiting for machine to come up
	I0810 22:33:40.583240    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:40.583817    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | unable to find current IP address of domain multinode-20210810223223-30291-m02 in network mk-multinode-20210810223223-30291
	I0810 22:33:40.583844    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | I0810 22:33:40.583766    4625 retry.go:31] will retry after 381.329545ms: waiting for machine to come up
	I0810 22:33:40.966355    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:40.966758    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | unable to find current IP address of domain multinode-20210810223223-30291-m02 in network mk-multinode-20210810223223-30291
	I0810 22:33:40.966780    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | I0810 22:33:40.966730    4625 retry.go:31] will retry after 422.765636ms: waiting for machine to come up
	I0810 22:33:41.391252    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:41.391751    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | unable to find current IP address of domain multinode-20210810223223-30291-m02 in network mk-multinode-20210810223223-30291
	I0810 22:33:41.391784    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | I0810 22:33:41.391698    4625 retry.go:31] will retry after 473.074753ms: waiting for machine to come up
	I0810 22:33:41.866200    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:41.866699    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | unable to find current IP address of domain multinode-20210810223223-30291-m02 in network mk-multinode-20210810223223-30291
	I0810 22:33:41.866723    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | I0810 22:33:41.866659    4625 retry.go:31] will retry after 587.352751ms: waiting for machine to come up
	I0810 22:33:42.455304    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:42.455729    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | unable to find current IP address of domain multinode-20210810223223-30291-m02 in network mk-multinode-20210810223223-30291
	I0810 22:33:42.455757    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | I0810 22:33:42.455675    4625 retry.go:31] will retry after 834.206799ms: waiting for machine to come up
	I0810 22:33:43.291548    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:43.292039    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | unable to find current IP address of domain multinode-20210810223223-30291-m02 in network mk-multinode-20210810223223-30291
	I0810 22:33:43.292066    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | I0810 22:33:43.291996    4625 retry.go:31] will retry after 746.553905ms: waiting for machine to come up
	I0810 22:33:44.039818    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:44.040291    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | unable to find current IP address of domain multinode-20210810223223-30291-m02 in network mk-multinode-20210810223223-30291
	I0810 22:33:44.040323    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | I0810 22:33:44.040247    4625 retry.go:31] will retry after 987.362415ms: waiting for machine to come up
	I0810 22:33:45.028879    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:45.029382    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | unable to find current IP address of domain multinode-20210810223223-30291-m02 in network mk-multinode-20210810223223-30291
	I0810 22:33:45.029407    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | I0810 22:33:45.029327    4625 retry.go:31] will retry after 1.189835008s: waiting for machine to come up
	I0810 22:33:46.220158    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:46.220603    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | unable to find current IP address of domain multinode-20210810223223-30291-m02 in network mk-multinode-20210810223223-30291
	I0810 22:33:46.220627    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | I0810 22:33:46.220559    4625 retry.go:31] will retry after 1.677229867s: waiting for machine to come up
	I0810 22:33:47.900417    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:47.900951    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | unable to find current IP address of domain multinode-20210810223223-30291-m02 in network mk-multinode-20210810223223-30291
	I0810 22:33:47.900989    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | I0810 22:33:47.900884    4625 retry.go:31] will retry after 2.346016261s: waiting for machine to come up
	I0810 22:33:50.247928    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:50.248472    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | unable to find current IP address of domain multinode-20210810223223-30291-m02 in network mk-multinode-20210810223223-30291
	I0810 22:33:50.248503    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | I0810 22:33:50.248417    4625 retry.go:31] will retry after 3.36678925s: waiting for machine to come up
	I0810 22:33:53.618810    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:53.619341    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Found IP for machine: 192.168.50.251
	I0810 22:33:53.619376    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has current primary IP address 192.168.50.251 and MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:53.619392    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Reserving static IP address...
	I0810 22:33:53.619739    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | unable to find host DHCP lease matching {name: "multinode-20210810223223-30291-m02", mac: "52:54:00:5f:3c:a9", ip: "192.168.50.251"} in network mk-multinode-20210810223223-30291
	I0810 22:33:53.667014    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | Getting to WaitForSSH function...
	I0810 22:33:53.667075    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Reserved static IP address: 192.168.50.251
	I0810 22:33:53.667092    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Waiting for SSH to be available...
	I0810 22:33:53.671847    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:53.672357    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:3c:a9", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:33:53 +0000 UTC Type:0 Mac:52:54:00:5f:3c:a9 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5f:3c:a9}
	I0810 22:33:53.672384    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined IP address 192.168.50.251 and MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:53.672460    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | Using SSH client type: external
	I0810 22:33:53.672489    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291-m02/id_rsa (-rw-------)
	I0810 22:33:53.672572    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.251 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0810 22:33:53.672598    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | About to run SSH command:
	I0810 22:33:53.672613    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | exit 0
	I0810 22:33:53.811241    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | SSH cmd err, output: <nil>: 
	I0810 22:33:53.811682    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) KVM machine creation complete!
	I0810 22:33:53.811775    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetConfigRaw
	I0810 22:33:53.812368    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .DriverName
	I0810 22:33:53.812553    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .DriverName
	I0810 22:33:53.812713    4347 main.go:130] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0810 22:33:53.812732    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetState
	I0810 22:33:53.815427    4347 main.go:130] libmachine: Detecting operating system of created instance...
	I0810 22:33:53.815441    4347 main.go:130] libmachine: Waiting for SSH to be available...
	I0810 22:33:53.815449    4347 main.go:130] libmachine: Getting to WaitForSSH function...
	I0810 22:33:53.815455    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHHostname
	I0810 22:33:53.819909    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:53.820253    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:3c:a9", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:33:53 +0000 UTC Type:0 Mac:52:54:00:5f:3c:a9 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:multinode-20210810223223-30291-m02 Clientid:01:52:54:00:5f:3c:a9}
	I0810 22:33:53.820275    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined IP address 192.168.50.251 and MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:53.820397    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHPort
	I0810 22:33:53.820569    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHKeyPath
	I0810 22:33:53.820705    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHKeyPath
	I0810 22:33:53.820803    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHUsername
	I0810 22:33:53.820982    4347 main.go:130] libmachine: Using SSH client type: native
	I0810 22:33:53.821136    4347 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0810 22:33:53.821150    4347 main.go:130] libmachine: About to run SSH command:
	exit 0
	I0810 22:33:53.950995    4347 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0810 22:33:53.951017    4347 main.go:130] libmachine: Detecting the provisioner...
	I0810 22:33:53.951026    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHHostname
	I0810 22:33:53.956198    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:53.956520    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:3c:a9", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:33:53 +0000 UTC Type:0 Mac:52:54:00:5f:3c:a9 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:multinode-20210810223223-30291-m02 Clientid:01:52:54:00:5f:3c:a9}
	I0810 22:33:53.956552    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined IP address 192.168.50.251 and MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:53.956672    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHPort
	I0810 22:33:53.956873    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHKeyPath
	I0810 22:33:53.957045    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHKeyPath
	I0810 22:33:53.957186    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHUsername
	I0810 22:33:53.957299    4347 main.go:130] libmachine: Using SSH client type: native
	I0810 22:33:53.957440    4347 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0810 22:33:53.957451    4347 main.go:130] libmachine: About to run SSH command:
	cat /etc/os-release
	I0810 22:33:54.088979    4347 main.go:130] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2020.02.12
	ID=buildroot
	VERSION_ID=2020.02.12
	PRETTY_NAME="Buildroot 2020.02.12"
	
	I0810 22:33:54.089056    4347 main.go:130] libmachine: found compatible host: buildroot
	I0810 22:33:54.089066    4347 main.go:130] libmachine: Provisioning with buildroot...
	I0810 22:33:54.089075    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetMachineName
	I0810 22:33:54.089329    4347 buildroot.go:166] provisioning hostname "multinode-20210810223223-30291-m02"
	I0810 22:33:54.089358    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetMachineName
	I0810 22:33:54.089535    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHHostname
	I0810 22:33:54.094741    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:54.095121    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:3c:a9", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:33:53 +0000 UTC Type:0 Mac:52:54:00:5f:3c:a9 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:multinode-20210810223223-30291-m02 Clientid:01:52:54:00:5f:3c:a9}
	I0810 22:33:54.095161    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined IP address 192.168.50.251 and MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:54.095272    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHPort
	I0810 22:33:54.095467    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHKeyPath
	I0810 22:33:54.095616    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHKeyPath
	I0810 22:33:54.095736    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHUsername
	I0810 22:33:54.095870    4347 main.go:130] libmachine: Using SSH client type: native
	I0810 22:33:54.096067    4347 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0810 22:33:54.096087    4347 main.go:130] libmachine: About to run SSH command:
	sudo hostname multinode-20210810223223-30291-m02 && echo "multinode-20210810223223-30291-m02" | sudo tee /etc/hostname
	I0810 22:33:54.237962    4347 main.go:130] libmachine: SSH cmd err, output: <nil>: multinode-20210810223223-30291-m02
	
	I0810 22:33:54.237992    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHHostname
	I0810 22:33:54.243272    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:54.243647    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:3c:a9", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:33:53 +0000 UTC Type:0 Mac:52:54:00:5f:3c:a9 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:multinode-20210810223223-30291-m02 Clientid:01:52:54:00:5f:3c:a9}
	I0810 22:33:54.243672    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined IP address 192.168.50.251 and MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:54.243836    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHPort
	I0810 22:33:54.244047    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHKeyPath
	I0810 22:33:54.244207    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHKeyPath
	I0810 22:33:54.244333    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHUsername
	I0810 22:33:54.244485    4347 main.go:130] libmachine: Using SSH client type: native
	I0810 22:33:54.244661    4347 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0810 22:33:54.244686    4347 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20210810223223-30291-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20210810223223-30291-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20210810223223-30291-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0810 22:33:54.382698    4347 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0810 22:33:54.382734    4347 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube}
	I0810 22:33:54.382752    4347 buildroot.go:174] setting up certificates
	I0810 22:33:54.382761    4347 provision.go:83] configureAuth start
	I0810 22:33:54.382770    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetMachineName
	I0810 22:33:54.383080    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetIP
	I0810 22:33:54.388135    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:54.388480    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:3c:a9", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:33:53 +0000 UTC Type:0 Mac:52:54:00:5f:3c:a9 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:multinode-20210810223223-30291-m02 Clientid:01:52:54:00:5f:3c:a9}
	I0810 22:33:54.388521    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined IP address 192.168.50.251 and MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:54.388699    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHHostname
	I0810 22:33:54.392730    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:54.393072    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:3c:a9", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:33:53 +0000 UTC Type:0 Mac:52:54:00:5f:3c:a9 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:multinode-20210810223223-30291-m02 Clientid:01:52:54:00:5f:3c:a9}
	I0810 22:33:54.393101    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined IP address 192.168.50.251 and MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:54.393214    4347 provision.go:137] copyHostCerts
	I0810 22:33:54.393261    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem
	I0810 22:33:54.393292    4347 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem, removing ...
	I0810 22:33:54.393302    4347 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem
	I0810 22:33:54.393365    4347 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem (1082 bytes)
	I0810 22:33:54.393456    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem
	I0810 22:33:54.393474    4347 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem, removing ...
	I0810 22:33:54.393480    4347 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem
	I0810 22:33:54.393500    4347 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem (1123 bytes)
	I0810 22:33:54.393551    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem
	I0810 22:33:54.393567    4347 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem, removing ...
	I0810 22:33:54.393579    4347 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem
	I0810 22:33:54.393598    4347 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem (1679 bytes)
	I0810 22:33:54.393650    4347 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem org=jenkins.multinode-20210810223223-30291-m02 san=[192.168.50.251 192.168.50.251 localhost 127.0.0.1 minikube multinode-20210810223223-30291-m02]
	I0810 22:33:54.552230    4347 provision.go:171] copyRemoteCerts
	I0810 22:33:54.552289    4347 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0810 22:33:54.552317    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHHostname
	I0810 22:33:54.558060    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:54.558430    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:3c:a9", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:33:53 +0000 UTC Type:0 Mac:52:54:00:5f:3c:a9 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:multinode-20210810223223-30291-m02 Clientid:01:52:54:00:5f:3c:a9}
	I0810 22:33:54.558464    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined IP address 192.168.50.251 and MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:54.558579    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHPort
	I0810 22:33:54.558782    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHKeyPath
	I0810 22:33:54.558948    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHUsername
	I0810 22:33:54.559117    4347 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291-m02/id_rsa Username:docker}
	I0810 22:33:54.650917    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0810 22:33:54.650988    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0810 22:33:54.667389    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0810 22:33:54.667439    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem --> /etc/docker/server.pem (1273 bytes)
	I0810 22:33:54.683372    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0810 22:33:54.683410    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0810 22:33:54.699647    4347 provision.go:86] duration metric: configureAuth took 316.874754ms
	I0810 22:33:54.699671    4347 buildroot.go:189] setting minikube options for container-runtime
	I0810 22:33:54.699921    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHHostname
	I0810 22:33:54.705184    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:54.705535    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:3c:a9", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:33:53 +0000 UTC Type:0 Mac:52:54:00:5f:3c:a9 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:multinode-20210810223223-30291-m02 Clientid:01:52:54:00:5f:3c:a9}
	I0810 22:33:54.705562    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined IP address 192.168.50.251 and MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:54.705701    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHPort
	I0810 22:33:54.705876    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHKeyPath
	I0810 22:33:54.706040    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHKeyPath
	I0810 22:33:54.706160    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHUsername
	I0810 22:33:54.706302    4347 main.go:130] libmachine: Using SSH client type: native
	I0810 22:33:54.706440    4347 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0810 22:33:54.706456    4347 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0810 22:33:55.299688    4347 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0810 22:33:55.299726    4347 main.go:130] libmachine: Checking connection to Docker...
	I0810 22:33:55.299740    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetURL
	I0810 22:33:55.302601    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | Using libvirt version 3000000
	I0810 22:33:55.307008    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:55.307330    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:3c:a9", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:33:53 +0000 UTC Type:0 Mac:52:54:00:5f:3c:a9 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:multinode-20210810223223-30291-m02 Clientid:01:52:54:00:5f:3c:a9}
	I0810 22:33:55.307361    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined IP address 192.168.50.251 and MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:55.307525    4347 main.go:130] libmachine: Docker is up and running!
	I0810 22:33:55.307546    4347 main.go:130] libmachine: Reticulating splines...
	I0810 22:33:55.307552    4347 client.go:171] LocalClient.Create took 15.916696067s
	I0810 22:33:55.307570    4347 start.go:168] duration metric: libmachine.API.Create for "multinode-20210810223223-30291" took 15.916747981s
	I0810 22:33:55.307583    4347 start.go:267] post-start starting for "multinode-20210810223223-30291-m02" (driver="kvm2")
	I0810 22:33:55.307593    4347 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0810 22:33:55.307616    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .DriverName
	I0810 22:33:55.307845    4347 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0810 22:33:55.307873    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHHostname
	I0810 22:33:55.312135    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:55.312458    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:3c:a9", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:33:53 +0000 UTC Type:0 Mac:52:54:00:5f:3c:a9 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:multinode-20210810223223-30291-m02 Clientid:01:52:54:00:5f:3c:a9}
	I0810 22:33:55.312485    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined IP address 192.168.50.251 and MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:55.312571    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHPort
	I0810 22:33:55.312745    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHKeyPath
	I0810 22:33:55.312906    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHUsername
	I0810 22:33:55.313019    4347 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291-m02/id_rsa Username:docker}
	I0810 22:33:55.407947    4347 ssh_runner.go:149] Run: cat /etc/os-release
	I0810 22:33:55.412349    4347 command_runner.go:124] > NAME=Buildroot
	I0810 22:33:55.412365    4347 command_runner.go:124] > VERSION=2020.02.12
	I0810 22:33:55.412369    4347 command_runner.go:124] > ID=buildroot
	I0810 22:33:55.412374    4347 command_runner.go:124] > VERSION_ID=2020.02.12
	I0810 22:33:55.412379    4347 command_runner.go:124] > PRETTY_NAME="Buildroot 2020.02.12"
	I0810 22:33:55.412564    4347 info.go:137] Remote host: Buildroot 2020.02.12
	I0810 22:33:55.412587    4347 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/addons for local assets ...
	I0810 22:33:55.412651    4347 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files for local assets ...
	I0810 22:33:55.412752    4347 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/302912.pem -> 302912.pem in /etc/ssl/certs
	I0810 22:33:55.412764    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/302912.pem -> /etc/ssl/certs/302912.pem
	I0810 22:33:55.412859    4347 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0810 22:33:55.419853    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/302912.pem --> /etc/ssl/certs/302912.pem (1708 bytes)
	I0810 22:33:55.438940    4347 start.go:270] post-start completed in 131.338951ms
	I0810 22:33:55.439002    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetConfigRaw
	I0810 22:33:55.439628    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetIP
	I0810 22:33:55.445097    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:55.445462    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:3c:a9", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:33:53 +0000 UTC Type:0 Mac:52:54:00:5f:3c:a9 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:multinode-20210810223223-30291-m02 Clientid:01:52:54:00:5f:3c:a9}
	I0810 22:33:55.445497    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined IP address 192.168.50.251 and MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:55.445725    4347 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/config.json ...
	I0810 22:33:55.445999    4347 start.go:129] duration metric: createHost completed in 16.070145517s
	I0810 22:33:55.446019    4347 start.go:80] releasing machines lock for "multinode-20210810223223-30291-m02", held for 16.070240884s
	I0810 22:33:55.446061    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .DriverName
	I0810 22:33:55.446337    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetIP
	I0810 22:33:55.450828    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:55.451102    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:3c:a9", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:33:53 +0000 UTC Type:0 Mac:52:54:00:5f:3c:a9 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:multinode-20210810223223-30291-m02 Clientid:01:52:54:00:5f:3c:a9}
	I0810 22:33:55.451135    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined IP address 192.168.50.251 and MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:55.453906    4347 out.go:177] * Found network options:
	I0810 22:33:55.455452    4347 out.go:177]   - NO_PROXY=192.168.50.32
	W0810 22:33:55.455496    4347 proxy.go:118] fail to check proxy env: Error ip not in block
	I0810 22:33:55.455548    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .DriverName
	I0810 22:33:55.455726    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .DriverName
	I0810 22:33:55.456208    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .DriverName
	W0810 22:33:55.456389    4347 proxy.go:118] fail to check proxy env: Error ip not in block
	I0810 22:33:55.456435    4347 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0810 22:33:55.456494    4347 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0810 22:33:55.456512    4347 ssh_runner.go:149] Run: sudo crictl images --output json
	I0810 22:33:55.456531    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHHostname
	I0810 22:33:55.456550    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHHostname
	I0810 22:33:55.461340    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:55.461671    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:3c:a9", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:33:53 +0000 UTC Type:0 Mac:52:54:00:5f:3c:a9 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:multinode-20210810223223-30291-m02 Clientid:01:52:54:00:5f:3c:a9}
	I0810 22:33:55.461718    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined IP address 192.168.50.251 and MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:55.461820    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHPort
	I0810 22:33:55.461979    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHKeyPath
	I0810 22:33:55.462124    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHUsername
	I0810 22:33:55.462275    4347 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291-m02/id_rsa Username:docker}
	I0810 22:33:55.462460    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:55.462811    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:3c:a9", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:33:53 +0000 UTC Type:0 Mac:52:54:00:5f:3c:a9 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:multinode-20210810223223-30291-m02 Clientid:01:52:54:00:5f:3c:a9}
	I0810 22:33:55.462840    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined IP address 192.168.50.251 and MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:55.462968    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHPort
	I0810 22:33:55.463133    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHKeyPath
	I0810 22:33:55.463276    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHUsername
	I0810 22:33:55.463416    4347 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291-m02/id_rsa Username:docker}
	I0810 22:33:55.566399    4347 command_runner.go:124] > <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
	I0810 22:33:55.566432    4347 command_runner.go:124] > <TITLE>302 Moved</TITLE></HEAD><BODY>
	I0810 22:33:55.566445    4347 command_runner.go:124] > <H1>302 Moved</H1>
	I0810 22:33:55.566452    4347 command_runner.go:124] > The document has moved
	I0810 22:33:55.566462    4347 command_runner.go:124] > <A HREF="https://cloud.google.com/container-registry/">here</A>.
	I0810 22:33:55.566468    4347 command_runner.go:124] > </BODY></HTML>
	I0810 22:33:55.576619    4347 command_runner.go:124] ! time="2021-08-10T22:33:55Z" level=warning msg="image connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	I0810 22:33:57.569137    4347 command_runner.go:124] ! time="2021-08-10T22:33:57Z" level=error msg="connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	I0810 22:33:59.562615    4347 command_runner.go:124] ! time="2021-08-10T22:33:59Z" level=error msg="connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	I0810 22:33:59.568157    4347 command_runner.go:124] > {
	I0810 22:33:59.568183    4347 command_runner.go:124] >   "images": [
	I0810 22:33:59.568189    4347 command_runner.go:124] >   ]
	I0810 22:33:59.568194    4347 command_runner.go:124] > }
	I0810 22:33:59.568215    4347 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.111689554s)
	I0810 22:33:59.568250    4347 crio.go:420] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.21.3". assuming images are not preloaded.
	I0810 22:33:59.568314    4347 ssh_runner.go:149] Run: which lz4
	I0810 22:33:59.572726    4347 command_runner.go:124] > /bin/lz4
	I0810 22:33:59.572967    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0810 22:33:59.573046    4347 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0810 22:33:59.577485    4347 command_runner.go:124] ! stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0810 22:33:59.577947    4347 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0810 22:33:59.577980    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (576184326 bytes)
	I0810 22:34:01.734690    4347 crio.go:362] Took 2.161670 seconds to copy over tarball
	I0810 22:34:01.734769    4347 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0810 22:34:06.980628    4347 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (5.245830184s)
	I0810 22:34:06.980669    4347 crio.go:369] Took 5.245944 seconds to extract the tarball
	I0810 22:34:06.980684    4347 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0810 22:34:07.020329    4347 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0810 22:34:07.032869    4347 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0810 22:34:07.043836    4347 docker.go:153] disabling docker service ...
	I0810 22:34:07.043893    4347 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0810 22:34:07.055118    4347 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0810 22:34:07.063888    4347 command_runner.go:124] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0810 22:34:07.064198    4347 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0810 22:34:07.186863    4347 command_runner.go:124] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0810 22:34:07.186945    4347 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0810 22:34:07.324959    4347 command_runner.go:124] ! Unit docker.service does not exist, proceeding anyway.
	I0810 22:34:07.324997    4347 command_runner.go:124] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0810 22:34:07.325069    4347 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0810 22:34:07.336641    4347 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0810 22:34:07.349280    4347 command_runner.go:124] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0810 22:34:07.349301    4347 command_runner.go:124] > image-endpoint: unix:///var/run/crio/crio.sock
	I0810 22:34:07.349743    4347 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0810 22:34:07.357152    4347 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0810 22:34:07.357171    4347 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
	I0810 22:34:07.364618    4347 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0810 22:34:07.370820    4347 command_runner.go:124] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0810 22:34:07.371014    4347 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0810 22:34:07.371064    4347 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0810 22:34:07.387991    4347 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0810 22:34:07.394493    4347 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0810 22:34:07.507775    4347 ssh_runner.go:149] Run: sudo systemctl start crio
	I0810 22:34:07.766969    4347 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0810 22:34:07.767057    4347 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0810 22:34:07.773077    4347 command_runner.go:124] >   File: /var/run/crio/crio.sock
	I0810 22:34:07.773107    4347 command_runner.go:124] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0810 22:34:07.773118    4347 command_runner.go:124] > Device: 14h/20d	Inode: 29756       Links: 1
	I0810 22:34:07.773129    4347 command_runner.go:124] > Access: (0755/srwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0810 22:34:07.773137    4347 command_runner.go:124] > Access: 2021-08-10 22:33:59.536352887 +0000
	I0810 22:34:07.773146    4347 command_runner.go:124] > Modify: 2021-08-10 22:33:55.233621889 +0000
	I0810 22:34:07.773156    4347 command_runner.go:124] > Change: 2021-08-10 22:33:55.233621889 +0000
	I0810 22:34:07.773162    4347 command_runner.go:124] >  Birth: -
	I0810 22:34:07.773277    4347 start.go:417] Will wait 60s for crictl version
	I0810 22:34:07.773351    4347 ssh_runner.go:149] Run: sudo crictl version
	I0810 22:34:07.814140    4347 command_runner.go:124] > Version:  0.1.0
	I0810 22:34:07.814169    4347 command_runner.go:124] > RuntimeName:  cri-o
	I0810 22:34:07.814177    4347 command_runner.go:124] > RuntimeVersion:  1.20.2
	I0810 22:34:07.814185    4347 command_runner.go:124] > RuntimeApiVersion:  v1alpha1
	I0810 22:34:07.814206    4347 start.go:426] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.2
	RuntimeApiVersion:  v1alpha1
	I0810 22:34:07.814280    4347 ssh_runner.go:149] Run: crio --version
	I0810 22:34:08.067036    4347 command_runner.go:124] > crio version 1.20.2
	I0810 22:34:08.067060    4347 command_runner.go:124] > Version:       1.20.2
	I0810 22:34:08.067068    4347 command_runner.go:124] > GitCommit:     d5a999ad0a35d895ded554e1e18c142075501a98
	I0810 22:34:08.067072    4347 command_runner.go:124] > GitTreeState:  clean
	I0810 22:34:08.067079    4347 command_runner.go:124] > BuildDate:     2021-08-06T09:19:16Z
	I0810 22:34:08.067090    4347 command_runner.go:124] > GoVersion:     go1.13.15
	I0810 22:34:08.067094    4347 command_runner.go:124] > Compiler:      gc
	I0810 22:34:08.067099    4347 command_runner.go:124] > Platform:      linux/amd64
	I0810 22:34:08.068506    4347 command_runner.go:124] ! time="2021-08-10T22:34:08Z" level=info msg="Starting CRI-O, version: 1.20.2, git: d5a999ad0a35d895ded554e1e18c142075501a98(clean)"
	I0810 22:34:08.068608    4347 ssh_runner.go:149] Run: crio --version
	I0810 22:34:08.350482    4347 command_runner.go:124] > crio version 1.20.2
	I0810 22:34:08.350507    4347 command_runner.go:124] > Version:       1.20.2
	I0810 22:34:08.350514    4347 command_runner.go:124] > GitCommit:     d5a999ad0a35d895ded554e1e18c142075501a98
	I0810 22:34:08.350519    4347 command_runner.go:124] > GitTreeState:  clean
	I0810 22:34:08.350525    4347 command_runner.go:124] > BuildDate:     2021-08-06T09:19:16Z
	I0810 22:34:08.350529    4347 command_runner.go:124] > GoVersion:     go1.13.15
	I0810 22:34:08.350533    4347 command_runner.go:124] > Compiler:      gc
	I0810 22:34:08.350538    4347 command_runner.go:124] > Platform:      linux/amd64
	I0810 22:34:08.351350    4347 command_runner.go:124] ! time="2021-08-10T22:34:08Z" level=info msg="Starting CRI-O, version: 1.20.2, git: d5a999ad0a35d895ded554e1e18c142075501a98(clean)"
	I0810 22:34:10.019648    4347 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.2 ...
	I0810 22:34:10.105794    4347 out.go:177]   - env NO_PROXY=192.168.50.32
	I0810 22:34:10.105875    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetIP
	I0810 22:34:10.112380    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:34:10.112803    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:3c:a9", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:33:53 +0000 UTC Type:0 Mac:52:54:00:5f:3c:a9 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:multinode-20210810223223-30291-m02 Clientid:01:52:54:00:5f:3c:a9}
	I0810 22:34:10.112844    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined IP address 192.168.50.251 and MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:34:10.113087    4347 ssh_runner.go:149] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0810 22:34:10.118195    4347 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0810 22:34:10.131148    4347 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291 for IP: 192.168.50.251
	I0810 22:34:10.131224    4347 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key
	I0810 22:34:10.131247    4347 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key
	I0810 22:34:10.131266    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0810 22:34:10.131289    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0810 22:34:10.131302    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0810 22:34:10.131314    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0810 22:34:10.131385    4347 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/30291.pem (1338 bytes)
	W0810 22:34:10.131437    4347 certs.go:369] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/30291_empty.pem, impossibly tiny 0 bytes
	I0810 22:34:10.131458    4347 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem (1679 bytes)
	I0810 22:34:10.131509    4347 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem (1082 bytes)
	I0810 22:34:10.131548    4347 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem (1123 bytes)
	I0810 22:34:10.131581    4347 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem (1679 bytes)
	I0810 22:34:10.131690    4347 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/302912.pem (1708 bytes)
	I0810 22:34:10.131731    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/30291.pem -> /usr/share/ca-certificates/30291.pem
	I0810 22:34:10.131749    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/302912.pem -> /usr/share/ca-certificates/302912.pem
	I0810 22:34:10.131765    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0810 22:34:10.132310    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0810 22:34:10.150840    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0810 22:34:10.167571    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0810 22:34:10.185093    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0810 22:34:10.201630    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/30291.pem --> /usr/share/ca-certificates/30291.pem (1338 bytes)
	I0810 22:34:10.218132    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/302912.pem --> /usr/share/ca-certificates/302912.pem (1708 bytes)
	I0810 22:34:10.236656    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0810 22:34:10.254374    4347 ssh_runner.go:149] Run: openssl version
	I0810 22:34:10.260368    4347 command_runner.go:124] > OpenSSL 1.1.1k  25 Mar 2021
	I0810 22:34:10.260962    4347 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/30291.pem && ln -fs /usr/share/ca-certificates/30291.pem /etc/ssl/certs/30291.pem"
	I0810 22:34:10.269493    4347 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/30291.pem
	I0810 22:34:10.274088    4347 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Aug 10 22:27 /usr/share/ca-certificates/30291.pem
	I0810 22:34:10.274255    4347 certs.go:416] hashing: -rw-r--r-- 1 root root 1338 Aug 10 22:27 /usr/share/ca-certificates/30291.pem
	I0810 22:34:10.274292    4347 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/30291.pem
	I0810 22:34:10.280076    4347 command_runner.go:124] > 51391683
	I0810 22:34:10.280380    4347 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/30291.pem /etc/ssl/certs/51391683.0"
	I0810 22:34:10.288896    4347 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/302912.pem && ln -fs /usr/share/ca-certificates/302912.pem /etc/ssl/certs/302912.pem"
	I0810 22:34:10.297176    4347 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/302912.pem
	I0810 22:34:10.302118    4347 command_runner.go:124] > -rw-r--r-- 1 root root 1708 Aug 10 22:27 /usr/share/ca-certificates/302912.pem
	I0810 22:34:10.302149    4347 certs.go:416] hashing: -rw-r--r-- 1 root root 1708 Aug 10 22:27 /usr/share/ca-certificates/302912.pem
	I0810 22:34:10.302187    4347 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/302912.pem
	I0810 22:34:10.308450    4347 command_runner.go:124] > 3ec20f2e
	I0810 22:34:10.308502    4347 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/302912.pem /etc/ssl/certs/3ec20f2e.0"
	I0810 22:34:10.316573    4347 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0810 22:34:10.324873    4347 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0810 22:34:10.329713    4347 command_runner.go:124] > -rw-r--r-- 1 root root 1111 Aug 10 22:18 /usr/share/ca-certificates/minikubeCA.pem
	I0810 22:34:10.329750    4347 certs.go:416] hashing: -rw-r--r-- 1 root root 1111 Aug 10 22:18 /usr/share/ca-certificates/minikubeCA.pem
	I0810 22:34:10.329791    4347 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0810 22:34:10.335731    4347 command_runner.go:124] > b5213941
	I0810 22:34:10.335799    4347 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0810 22:34:10.345248    4347 ssh_runner.go:149] Run: crio config
	I0810 22:34:10.593377    4347 command_runner.go:124] ! time="2021-08-10T22:34:10Z" level=info msg="Starting CRI-O, version: 1.20.2, git: d5a999ad0a35d895ded554e1e18c142075501a98(clean)"
	I0810 22:34:10.594877    4347 command_runner.go:124] ! time="2021-08-10T22:34:10Z" level=warning msg="The 'registries' option in crio.conf(5) (referenced in \"/etc/crio/crio.conf\") has been deprecated and will be removed with CRI-O 1.21."
	I0810 22:34:10.594940    4347 command_runner.go:124] ! time="2021-08-10T22:34:10Z" level=warning msg="Please refer to containers-registries.conf(5) for configuring unqualified-search registries."
	I0810 22:34:10.597390    4347 command_runner.go:124] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0810 22:34:10.605019    4347 command_runner.go:124] > # The CRI-O configuration file specifies all of the available configuration
	I0810 22:34:10.605047    4347 command_runner.go:124] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0810 22:34:10.605062    4347 command_runner.go:124] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0810 22:34:10.605068    4347 command_runner.go:124] > #
	I0810 22:34:10.605083    4347 command_runner.go:124] > # Please refer to crio.conf(5) for details of all configuration options.
	I0810 22:34:10.605096    4347 command_runner.go:124] > # CRI-O supports partial configuration reload during runtime, which can be
	I0810 22:34:10.605110    4347 command_runner.go:124] > # done by sending SIGHUP to the running process. Currently supported options
	I0810 22:34:10.605121    4347 command_runner.go:124] > # are explicitly mentioned with: 'This option supports live configuration
	I0810 22:34:10.605127    4347 command_runner.go:124] > # reload'.
	I0810 22:34:10.605134    4347 command_runner.go:124] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0810 22:34:10.605143    4347 command_runner.go:124] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0810 22:34:10.605152    4347 command_runner.go:124] > # you want to change the system's defaults. If you want to modify storage just
	I0810 22:34:10.605163    4347 command_runner.go:124] > # for CRI-O, you can change the storage configuration options here.
	I0810 22:34:10.605170    4347 command_runner.go:124] > [crio]
	I0810 22:34:10.605177    4347 command_runner.go:124] > # Path to the "root directory". CRI-O stores all of its data, including
	I0810 22:34:10.605184    4347 command_runner.go:124] > # containers images, in this directory.
	I0810 22:34:10.605189    4347 command_runner.go:124] > #root = "/var/lib/containers/storage"
	I0810 22:34:10.605204    4347 command_runner.go:124] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0810 22:34:10.605215    4347 command_runner.go:124] > #runroot = "/var/run/containers/storage"
	I0810 22:34:10.605229    4347 command_runner.go:124] > # Storage driver used to manage the storage of images and containers. Please
	I0810 22:34:10.605242    4347 command_runner.go:124] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0810 22:34:10.605252    4347 command_runner.go:124] > #storage_driver = "overlay"
	I0810 22:34:10.605261    4347 command_runner.go:124] > # List to pass options to the storage driver. Please refer to
	I0810 22:34:10.605273    4347 command_runner.go:124] > # containers-storage.conf(5) to see all available storage options.
	I0810 22:34:10.605282    4347 command_runner.go:124] > #storage_option = [
	I0810 22:34:10.605287    4347 command_runner.go:124] > #]
	I0810 22:34:10.605302    4347 command_runner.go:124] > # The default log directory where all logs will go unless directly specified by
	I0810 22:34:10.605314    4347 command_runner.go:124] > # the kubelet. The log directory specified must be an absolute directory.
	I0810 22:34:10.605324    4347 command_runner.go:124] > log_dir = "/var/log/crio/pods"
	I0810 22:34:10.605336    4347 command_runner.go:124] > # Location for CRI-O to lay down the temporary version file.
	I0810 22:34:10.605348    4347 command_runner.go:124] > # It is used to check if crio wipe should wipe containers, which should
	I0810 22:34:10.605355    4347 command_runner.go:124] > # always happen on a node reboot
	I0810 22:34:10.605360    4347 command_runner.go:124] > version_file = "/var/run/crio/version"
	I0810 22:34:10.605368    4347 command_runner.go:124] > # Location for CRI-O to lay down the persistent version file.
	I0810 22:34:10.605375    4347 command_runner.go:124] > # It is used to check if crio wipe should wipe images, which should
	I0810 22:34:10.605384    4347 command_runner.go:124] > # only happen when CRI-O has been upgraded
	I0810 22:34:10.605398    4347 command_runner.go:124] > version_file_persist = "/var/lib/crio/version"
	I0810 22:34:10.605407    4347 command_runner.go:124] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0810 22:34:10.605412    4347 command_runner.go:124] > [crio.api]
	I0810 22:34:10.605420    4347 command_runner.go:124] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0810 22:34:10.605425    4347 command_runner.go:124] > listen = "/var/run/crio/crio.sock"
	I0810 22:34:10.605432    4347 command_runner.go:124] > # IP address on which the stream server will listen.
	I0810 22:34:10.605437    4347 command_runner.go:124] > stream_address = "127.0.0.1"
	I0810 22:34:10.605444    4347 command_runner.go:124] > # The port on which the stream server will listen. If the port is set to "0", then
	I0810 22:34:10.605452    4347 command_runner.go:124] > # CRI-O will allocate a random free port number.
	I0810 22:34:10.605456    4347 command_runner.go:124] > stream_port = "0"
	I0810 22:34:10.605462    4347 command_runner.go:124] > # Enable encrypted TLS transport of the stream server.
	I0810 22:34:10.605467    4347 command_runner.go:124] > stream_enable_tls = false
	I0810 22:34:10.605476    4347 command_runner.go:124] > # Length of time until open streams terminate due to lack of activity
	I0810 22:34:10.605481    4347 command_runner.go:124] > stream_idle_timeout = ""
	I0810 22:34:10.605487    4347 command_runner.go:124] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0810 22:34:10.605496    4347 command_runner.go:124] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0810 22:34:10.605503    4347 command_runner.go:124] > # minutes.
	I0810 22:34:10.605506    4347 command_runner.go:124] > stream_tls_cert = ""
	I0810 22:34:10.605513    4347 command_runner.go:124] > # Path to the key file used to serve the encrypted stream. This file can
	I0810 22:34:10.605521    4347 command_runner.go:124] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0810 22:34:10.605525    4347 command_runner.go:124] > stream_tls_key = ""
	I0810 22:34:10.605531    4347 command_runner.go:124] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0810 22:34:10.605540    4347 command_runner.go:124] > # communication with the encrypted stream. This file can change and CRI-O will
	I0810 22:34:10.605546    4347 command_runner.go:124] > # automatically pick up the changes within 5 minutes.
	I0810 22:34:10.605552    4347 command_runner.go:124] > stream_tls_ca = ""
	I0810 22:34:10.605560    4347 command_runner.go:124] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0810 22:34:10.605567    4347 command_runner.go:124] > grpc_max_send_msg_size = 16777216
	I0810 22:34:10.605576    4347 command_runner.go:124] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0810 22:34:10.605582    4347 command_runner.go:124] > grpc_max_recv_msg_size = 16777216
	I0810 22:34:10.605589    4347 command_runner.go:124] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0810 22:34:10.605597    4347 command_runner.go:124] > # and options for how to set up and manage the OCI runtime.
	I0810 22:34:10.605601    4347 command_runner.go:124] > [crio.runtime]
	I0810 22:34:10.605607    4347 command_runner.go:124] > # A list of ulimits to be set in containers by default, specified as
	I0810 22:34:10.605614    4347 command_runner.go:124] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0810 22:34:10.605618    4347 command_runner.go:124] > # "nofile=1024:2048"
	I0810 22:34:10.605624    4347 command_runner.go:124] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0810 22:34:10.605630    4347 command_runner.go:124] > #default_ulimits = [
	I0810 22:34:10.605633    4347 command_runner.go:124] > #]
	I0810 22:34:10.605640    4347 command_runner.go:124] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0810 22:34:10.605647    4347 command_runner.go:124] > no_pivot = false
	I0810 22:34:10.605656    4347 command_runner.go:124] > # decryption_keys_path is the path where the keys required for
	I0810 22:34:10.605671    4347 command_runner.go:124] > # image decryption are stored. This option supports live configuration reload.
	I0810 22:34:10.605679    4347 command_runner.go:124] > decryption_keys_path = "/etc/crio/keys/"
	I0810 22:34:10.605685    4347 command_runner.go:124] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0810 22:34:10.605692    4347 command_runner.go:124] > # Will be searched for using $PATH if empty.
	I0810 22:34:10.605699    4347 command_runner.go:124] > conmon = "/usr/libexec/crio/conmon"
	I0810 22:34:10.605706    4347 command_runner.go:124] > # Cgroup setting for conmon
	I0810 22:34:10.605711    4347 command_runner.go:124] > conmon_cgroup = "system.slice"
	I0810 22:34:10.605726    4347 command_runner.go:124] > # Environment variable list for the conmon process, used for passing necessary
	I0810 22:34:10.605734    4347 command_runner.go:124] > # environment variables to conmon or the runtime.
	I0810 22:34:10.605738    4347 command_runner.go:124] > conmon_env = [
	I0810 22:34:10.605744    4347 command_runner.go:124] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0810 22:34:10.605749    4347 command_runner.go:124] > ]
	I0810 22:34:10.605755    4347 command_runner.go:124] > # Additional environment variables to set for all the
	I0810 22:34:10.605770    4347 command_runner.go:124] > # containers. These are overridden if set in the
	I0810 22:34:10.605776    4347 command_runner.go:124] > # container image spec or in the container runtime configuration.
	I0810 22:34:10.605783    4347 command_runner.go:124] > default_env = [
	I0810 22:34:10.605786    4347 command_runner.go:124] > ]
	I0810 22:34:10.605792    4347 command_runner.go:124] > # If true, SELinux will be used for pod separation on the host.
	I0810 22:34:10.605798    4347 command_runner.go:124] > selinux = false
	I0810 22:34:10.605805    4347 command_runner.go:124] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0810 22:34:10.605814    4347 command_runner.go:124] > # for the runtime. If not specified, then the internal default seccomp profile
	I0810 22:34:10.605820    4347 command_runner.go:124] > # will be used. This option supports live configuration reload.
	I0810 22:34:10.605826    4347 command_runner.go:124] > seccomp_profile = ""
	I0810 22:34:10.605835    4347 command_runner.go:124] > # Changes the meaning of an empty seccomp profile. By default
	I0810 22:34:10.605845    4347 command_runner.go:124] > # (and according to CRI spec), an empty profile means unconfined.
	I0810 22:34:10.605851    4347 command_runner.go:124] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0810 22:34:10.605860    4347 command_runner.go:124] > # which might increase security.
	I0810 22:34:10.605865    4347 command_runner.go:124] > seccomp_use_default_when_empty = false
	I0810 22:34:10.605874    4347 command_runner.go:124] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0810 22:34:10.605881    4347 command_runner.go:124] > # profile name is "crio-default". This profile only takes effect if the user
	I0810 22:34:10.605890    4347 command_runner.go:124] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0810 22:34:10.605896    4347 command_runner.go:124] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0810 22:34:10.605904    4347 command_runner.go:124] > # This option supports live configuration reload.
	I0810 22:34:10.605908    4347 command_runner.go:124] > apparmor_profile = "crio-default"
	I0810 22:34:10.605916    4347 command_runner.go:124] > # Used to change irqbalance service config file path which is used for configuring
	I0810 22:34:10.605922    4347 command_runner.go:124] > # irqbalance daemon.
	I0810 22:34:10.605927    4347 command_runner.go:124] > irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0810 22:34:10.605935    4347 command_runner.go:124] > # Cgroup management implementation used for the runtime.
	I0810 22:34:10.605941    4347 command_runner.go:124] > cgroup_manager = "systemd"
	I0810 22:34:10.605949    4347 command_runner.go:124] > # Specify whether the image pull must be performed in a separate cgroup.
	I0810 22:34:10.605954    4347 command_runner.go:124] > separate_pull_cgroup = ""
	I0810 22:34:10.605960    4347 command_runner.go:124] > # List of default capabilities for containers. If it is empty or commented out,
	I0810 22:34:10.605969    4347 command_runner.go:124] > # only the capabilities defined in the containers json file by the user/kube
	I0810 22:34:10.605973    4347 command_runner.go:124] > # will be added.
	I0810 22:34:10.605977    4347 command_runner.go:124] > default_capabilities = [
	I0810 22:34:10.605980    4347 command_runner.go:124] > 	"CHOWN",
	I0810 22:34:10.605985    4347 command_runner.go:124] > 	"DAC_OVERRIDE",
	I0810 22:34:10.605994    4347 command_runner.go:124] > 	"FSETID",
	I0810 22:34:10.605999    4347 command_runner.go:124] > 	"FOWNER",
	I0810 22:34:10.606005    4347 command_runner.go:124] > 	"SETGID",
	I0810 22:34:10.606010    4347 command_runner.go:124] > 	"SETUID",
	I0810 22:34:10.606015    4347 command_runner.go:124] > 	"SETPCAP",
	I0810 22:34:10.606021    4347 command_runner.go:124] > 	"NET_BIND_SERVICE",
	I0810 22:34:10.606027    4347 command_runner.go:124] > 	"KILL",
	I0810 22:34:10.606031    4347 command_runner.go:124] > ]
	I0810 22:34:10.606041    4347 command_runner.go:124] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0810 22:34:10.606051    4347 command_runner.go:124] > # defined in the container json file by the user/kube will be added.
	I0810 22:34:10.606057    4347 command_runner.go:124] > default_sysctls = [
	I0810 22:34:10.606063    4347 command_runner.go:124] > ]
	I0810 22:34:10.606070    4347 command_runner.go:124] > # List of additional devices, specified as
	I0810 22:34:10.606082    4347 command_runner.go:124] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0810 22:34:10.606092    4347 command_runner.go:124] > # If it is empty or commented out, only the devices
	I0810 22:34:10.606101    4347 command_runner.go:124] > # defined in the container json file by the user/kube will be added.
	I0810 22:34:10.606108    4347 command_runner.go:124] > additional_devices = [
	I0810 22:34:10.606113    4347 command_runner.go:124] > ]
	I0810 22:34:10.606124    4347 command_runner.go:124] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0810 22:34:10.606134    4347 command_runner.go:124] > # directories does not exist, then CRI-O will automatically skip them.
	I0810 22:34:10.606140    4347 command_runner.go:124] > hooks_dir = [
	I0810 22:34:10.606148    4347 command_runner.go:124] > 	"/usr/share/containers/oci/hooks.d",
	I0810 22:34:10.606152    4347 command_runner.go:124] > ]
	I0810 22:34:10.606163    4347 command_runner.go:124] > # Path to the file specifying the defaults mounts for each container. The
	I0810 22:34:10.606173    4347 command_runner.go:124] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0810 22:34:10.606183    4347 command_runner.go:124] > # its default mounts from the following two files:
	I0810 22:34:10.606188    4347 command_runner.go:124] > #
	I0810 22:34:10.606200    4347 command_runner.go:124] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0810 22:34:10.606207    4347 command_runner.go:124] > #      override file, where users can either add in their own default mounts, or
	I0810 22:34:10.606216    4347 command_runner.go:124] > #      override the default mounts shipped with the package.
	I0810 22:34:10.606222    4347 command_runner.go:124] > #
	I0810 22:34:10.606228    4347 command_runner.go:124] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0810 22:34:10.606236    4347 command_runner.go:124] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0810 22:34:10.606243    4347 command_runner.go:124] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0810 22:34:10.606249    4347 command_runner.go:124] > #      only add mounts it finds in this file.
	I0810 22:34:10.606253    4347 command_runner.go:124] > #
	I0810 22:34:10.606257    4347 command_runner.go:124] > #default_mounts_file = ""
	I0810 22:34:10.606262    4347 command_runner.go:124] > # Maximum number of processes allowed in a container.
	I0810 22:34:10.606267    4347 command_runner.go:124] > pids_limit = 1024
	I0810 22:34:10.606273    4347 command_runner.go:124] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0810 22:34:10.606280    4347 command_runner.go:124] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0810 22:34:10.606287    4347 command_runner.go:124] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0810 22:34:10.606295    4347 command_runner.go:124] > # limit is never exceeded.
	I0810 22:34:10.606300    4347 command_runner.go:124] > log_size_max = -1
	I0810 22:34:10.606322    4347 command_runner.go:124] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0810 22:34:10.606329    4347 command_runner.go:124] > log_to_journald = false
	I0810 22:34:10.606335    4347 command_runner.go:124] > # Path to directory in which container exit files are written to by conmon.
	I0810 22:34:10.606342    4347 command_runner.go:124] > container_exits_dir = "/var/run/crio/exits"
	I0810 22:34:10.606347    4347 command_runner.go:124] > # Path to directory for container attach sockets.
	I0810 22:34:10.606352    4347 command_runner.go:124] > container_attach_socket_dir = "/var/run/crio"
	I0810 22:34:10.606358    4347 command_runner.go:124] > # The prefix to use for the source of the bind mounts.
	I0810 22:34:10.606362    4347 command_runner.go:124] > bind_mount_prefix = ""
	I0810 22:34:10.606368    4347 command_runner.go:124] > # If set to true, all containers will run in read-only mode.
	I0810 22:34:10.606372    4347 command_runner.go:124] > read_only = false
	I0810 22:34:10.606378    4347 command_runner.go:124] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0810 22:34:10.606387    4347 command_runner.go:124] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0810 22:34:10.606391    4347 command_runner.go:124] > # live configuration reload.
	I0810 22:34:10.606395    4347 command_runner.go:124] > log_level = "info"
	I0810 22:34:10.606403    4347 command_runner.go:124] > # Filter the log messages by the provided regular expression.
	I0810 22:34:10.606409    4347 command_runner.go:124] > # This option supports live configuration reload.
	I0810 22:34:10.606413    4347 command_runner.go:124] > log_filter = ""
	I0810 22:34:10.606419    4347 command_runner.go:124] > # The UID mappings for the user namespace of each container. A range is
	I0810 22:34:10.606426    4347 command_runner.go:124] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0810 22:34:10.606430    4347 command_runner.go:124] > # separated by comma.
	I0810 22:34:10.606433    4347 command_runner.go:124] > uid_mappings = ""
	I0810 22:34:10.606440    4347 command_runner.go:124] > # The GID mappings for the user namespace of each container. A range is
	I0810 22:34:10.606446    4347 command_runner.go:124] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0810 22:34:10.606451    4347 command_runner.go:124] > # separated by comma.
	I0810 22:34:10.606454    4347 command_runner.go:124] > gid_mappings = ""
	I0810 22:34:10.606463    4347 command_runner.go:124] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0810 22:34:10.606475    4347 command_runner.go:124] > # regarding the proper termination of the container. The lowest possible
	I0810 22:34:10.606484    4347 command_runner.go:124] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0810 22:34:10.606490    4347 command_runner.go:124] > ctr_stop_timeout = 30
	I0810 22:34:10.606500    4347 command_runner.go:124] > # manage_ns_lifecycle determines whether we pin and remove namespaces
	I0810 22:34:10.606507    4347 command_runner.go:124] > # and manage their lifecycle.
	I0810 22:34:10.606514    4347 command_runner.go:124] > # This option is being deprecated, and will be unconditionally true in the future.
	I0810 22:34:10.606518    4347 command_runner.go:124] > manage_ns_lifecycle = true
	I0810 22:34:10.606524    4347 command_runner.go:124] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0810 22:34:10.606532    4347 command_runner.go:124] > # when a pod does not have a private PID namespace, and does not use
	I0810 22:34:10.606537    4347 command_runner.go:124] > # a kernel separating runtime (like kata).
	I0810 22:34:10.606544    4347 command_runner.go:124] > # It requires manage_ns_lifecycle to be true.
	I0810 22:34:10.606549    4347 command_runner.go:124] > drop_infra_ctr = false
	I0810 22:34:10.606555    4347 command_runner.go:124] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0810 22:34:10.606565    4347 command_runner.go:124] > # You can use linux CPU list format to specify desired CPUs.
	I0810 22:34:10.606578    4347 command_runner.go:124] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0810 22:34:10.606587    4347 command_runner.go:124] > # infra_ctr_cpuset = ""
	I0810 22:34:10.606596    4347 command_runner.go:124] > # The directory where the state of the managed namespaces gets tracked.
	I0810 22:34:10.606605    4347 command_runner.go:124] > # Only used when manage_ns_lifecycle is true.
	I0810 22:34:10.606610    4347 command_runner.go:124] > namespaces_dir = "/var/run"
	I0810 22:34:10.606620    4347 command_runner.go:124] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0810 22:34:10.606624    4347 command_runner.go:124] > pinns_path = "/usr/bin/pinns"
	I0810 22:34:10.606631    4347 command_runner.go:124] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0810 22:34:10.606639    4347 command_runner.go:124] > # The name is matched against the runtimes map below. If this value is changed,
	I0810 22:34:10.606645    4347 command_runner.go:124] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0810 22:34:10.606650    4347 command_runner.go:124] > default_runtime = "runc"
	I0810 22:34:10.606657    4347 command_runner.go:124] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0810 22:34:10.606664    4347 command_runner.go:124] > # The runtime to use is picked based on the runtime_handler provided by the CRI.
	I0810 22:34:10.606671    4347 command_runner.go:124] > # If no runtime_handler is provided, the runtime will be picked based on the level
	I0810 22:34:10.606679    4347 command_runner.go:124] > # of trust of the workload. Each entry in the table should follow the format:
	I0810 22:34:10.606682    4347 command_runner.go:124] > #
	I0810 22:34:10.606687    4347 command_runner.go:124] > #[crio.runtime.runtimes.runtime-handler]
	I0810 22:34:10.606693    4347 command_runner.go:124] > #  runtime_path = "/path/to/the/executable"
	I0810 22:34:10.606697    4347 command_runner.go:124] > #  runtime_type = "oci"
	I0810 22:34:10.606702    4347 command_runner.go:124] > #  runtime_root = "/path/to/the/root"
	I0810 22:34:10.606708    4347 command_runner.go:124] > #  privileged_without_host_devices = false
	I0810 22:34:10.606712    4347 command_runner.go:124] > #  allowed_annotations = []
	I0810 22:34:10.606719    4347 command_runner.go:124] > # Where:
	I0810 22:34:10.606725    4347 command_runner.go:124] > # - runtime-handler: name used to identify the runtime
	I0810 22:34:10.606734    4347 command_runner.go:124] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0810 22:34:10.606743    4347 command_runner.go:124] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0810 22:34:10.606750    4347 command_runner.go:124] > #   the runtime executable name, and the runtime executable should be placed
	I0810 22:34:10.606753    4347 command_runner.go:124] > #   in $PATH.
	I0810 22:34:10.606760    4347 command_runner.go:124] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0810 22:34:10.606766    4347 command_runner.go:124] > #   omitted, an "oci" runtime is assumed.
	I0810 22:34:10.606772    4347 command_runner.go:124] > # - runtime_root (optional, string): root directory for storage of containers
	I0810 22:34:10.606775    4347 command_runner.go:124] > #   state.
	I0810 22:34:10.606782    4347 command_runner.go:124] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0810 22:34:10.606789    4347 command_runner.go:124] > #   host devices from being passed to privileged containers.
	I0810 22:34:10.606795    4347 command_runner.go:124] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0810 22:34:10.606805    4347 command_runner.go:124] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0810 22:34:10.606811    4347 command_runner.go:124] > #   The currently recognized values are:
	I0810 22:34:10.606818    4347 command_runner.go:124] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0810 22:34:10.606825    4347 command_runner.go:124] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0810 22:34:10.606831    4347 command_runner.go:124] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0810 22:34:10.606836    4347 command_runner.go:124] > [crio.runtime.runtimes.runc]
	I0810 22:34:10.606841    4347 command_runner.go:124] > runtime_path = "/usr/bin/runc"
	I0810 22:34:10.606845    4347 command_runner.go:124] > runtime_type = "oci"
	I0810 22:34:10.606849    4347 command_runner.go:124] > runtime_root = "/run/runc"
	I0810 22:34:10.606856    4347 command_runner.go:124] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0810 22:34:10.606861    4347 command_runner.go:124] > # running containers
	I0810 22:34:10.606865    4347 command_runner.go:124] > #[crio.runtime.runtimes.crun]
	I0810 22:34:10.606873    4347 command_runner.go:124] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0810 22:34:10.606882    4347 command_runner.go:124] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0810 22:34:10.606888    4347 command_runner.go:124] > # surface and mitigating the consequences of containers breakout.
	I0810 22:34:10.606895    4347 command_runner.go:124] > # Kata Containers with the default configured VMM
	I0810 22:34:10.606900    4347 command_runner.go:124] > #[crio.runtime.runtimes.kata-runtime]
	I0810 22:34:10.606904    4347 command_runner.go:124] > # Kata Containers with the QEMU VMM
	I0810 22:34:10.606910    4347 command_runner.go:124] > #[crio.runtime.runtimes.kata-qemu]
	I0810 22:34:10.606914    4347 command_runner.go:124] > # Kata Containers with the Firecracker VMM
	I0810 22:34:10.606919    4347 command_runner.go:124] > #[crio.runtime.runtimes.kata-fc]
	I0810 22:34:10.606926    4347 command_runner.go:124] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0810 22:34:10.606931    4347 command_runner.go:124] > #
	I0810 22:34:10.606937    4347 command_runner.go:124] > # CRI-O reads its configured registries defaults from the system wide
	I0810 22:34:10.606943    4347 command_runner.go:124] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0810 22:34:10.606950    4347 command_runner.go:124] > # you want to modify just CRI-O, you can change the registries configuration in
	I0810 22:34:10.606957    4347 command_runner.go:124] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0810 22:34:10.606964    4347 command_runner.go:124] > # use the system's defaults from /etc/containers/registries.conf.
	I0810 22:34:10.606970    4347 command_runner.go:124] > [crio.image]
	I0810 22:34:10.606977    4347 command_runner.go:124] > # Default transport for pulling images from a remote container storage.
	I0810 22:34:10.606981    4347 command_runner.go:124] > default_transport = "docker://"
	I0810 22:34:10.606991    4347 command_runner.go:124] > # The path to a file containing credentials necessary for pulling images from
	I0810 22:34:10.607004    4347 command_runner.go:124] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0810 22:34:10.607010    4347 command_runner.go:124] > global_auth_file = ""
	I0810 22:34:10.607019    4347 command_runner.go:124] > # The image used to instantiate infra containers.
	I0810 22:34:10.607026    4347 command_runner.go:124] > # This option supports live configuration reload.
	I0810 22:34:10.607035    4347 command_runner.go:124] > pause_image = "k8s.gcr.io/pause:3.4.1"
	I0810 22:34:10.607045    4347 command_runner.go:124] > # The path to a file containing credentials specific for pulling the pause_image from
	I0810 22:34:10.607058    4347 command_runner.go:124] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0810 22:34:10.607068    4347 command_runner.go:124] > # This option supports live configuration reload.
	I0810 22:34:10.607073    4347 command_runner.go:124] > pause_image_auth_file = ""
	I0810 22:34:10.607083    4347 command_runner.go:124] > # The command to run to have a container stay in the paused state.
	I0810 22:34:10.607092    4347 command_runner.go:124] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0810 22:34:10.607104    4347 command_runner.go:124] > # specified in the pause image. When commented out, it will fallback to the
	I0810 22:34:10.607116    4347 command_runner.go:124] > # default: "/pause". This option supports live configuration reload.
	I0810 22:34:10.607125    4347 command_runner.go:124] > pause_command = "/pause"
	I0810 22:34:10.607135    4347 command_runner.go:124] > # Path to the file which decides what sort of policy we use when deciding
	I0810 22:34:10.607148    4347 command_runner.go:124] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0810 22:34:10.607159    4347 command_runner.go:124] > # this option be used, as the default behavior of using the system-wide default
	I0810 22:34:10.607171    4347 command_runner.go:124] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0810 22:34:10.607180    4347 command_runner.go:124] > # refer to containers-policy.json(5) for more details.
	I0810 22:34:10.607186    4347 command_runner.go:124] > signature_policy = ""
	I0810 22:34:10.607197    4347 command_runner.go:124] > # List of registries to skip TLS verification for pulling images. Please
	I0810 22:34:10.607205    4347 command_runner.go:124] > # consider configuring the registries via /etc/containers/registries.conf before
	I0810 22:34:10.607213    4347 command_runner.go:124] > # changing them here.
	I0810 22:34:10.607219    4347 command_runner.go:124] > #insecure_registries = "[]"
	I0810 22:34:10.607230    4347 command_runner.go:124] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0810 22:34:10.607240    4347 command_runner.go:124] > # ignore; the latter will ignore volumes entirely.
	I0810 22:34:10.607246    4347 command_runner.go:124] > image_volumes = "mkdir"
	I0810 22:34:10.607261    4347 command_runner.go:124] > # List of registries to be used when pulling an unqualified image (e.g.,
	I0810 22:34:10.607273    4347 command_runner.go:124] > # "alpine:latest"). By default, registries is set to "docker.io" for
	I0810 22:34:10.607280    4347 command_runner.go:124] > # compatibility reasons. Depending on your workload and usecase you may add more
	I0810 22:34:10.607287    4347 command_runner.go:124] > # registries (e.g., "quay.io", "registry.fedoraproject.org",
	I0810 22:34:10.607292    4347 command_runner.go:124] > # "registry.opensuse.org", etc.).
	I0810 22:34:10.607296    4347 command_runner.go:124] > #registries = [
	I0810 22:34:10.607300    4347 command_runner.go:124] > # 	"docker.io",
	I0810 22:34:10.607303    4347 command_runner.go:124] > #]
	I0810 22:34:10.607311    4347 command_runner.go:124] > # Temporary directory to use for storing big files
	I0810 22:34:10.607316    4347 command_runner.go:124] > big_files_temporary_dir = ""
	I0810 22:34:10.607323    4347 command_runner.go:124] > # The crio.network table contains settings pertaining to the management of
	I0810 22:34:10.607327    4347 command_runner.go:124] > # CNI plugins.
	I0810 22:34:10.607331    4347 command_runner.go:124] > [crio.network]
	I0810 22:34:10.607337    4347 command_runner.go:124] > # The default CNI network name to be selected. If not set or "", then
	I0810 22:34:10.607343    4347 command_runner.go:124] > # CRI-O will pick-up the first one found in network_dir.
	I0810 22:34:10.607348    4347 command_runner.go:124] > # cni_default_network = "kindnet"
	I0810 22:34:10.607355    4347 command_runner.go:124] > # Path to the directory where CNI configuration files are located.
	I0810 22:34:10.607360    4347 command_runner.go:124] > network_dir = "/etc/cni/net.d/"
	I0810 22:34:10.607368    4347 command_runner.go:124] > # Paths to directories where CNI plugin binaries are located.
	I0810 22:34:10.607374    4347 command_runner.go:124] > plugin_dirs = [
	I0810 22:34:10.607379    4347 command_runner.go:124] > 	"/opt/cni/bin/",
	I0810 22:34:10.607382    4347 command_runner.go:124] > ]
	I0810 22:34:10.607388    4347 command_runner.go:124] > # A necessary configuration for Prometheus based metrics retrieval
	I0810 22:34:10.607392    4347 command_runner.go:124] > [crio.metrics]
	I0810 22:34:10.607397    4347 command_runner.go:124] > # Globally enable or disable metrics support.
	I0810 22:34:10.607403    4347 command_runner.go:124] > enable_metrics = true
	I0810 22:34:10.607408    4347 command_runner.go:124] > # The port on which the metrics server will listen.
	I0810 22:34:10.607412    4347 command_runner.go:124] > metrics_port = 9090
	I0810 22:34:10.607435    4347 command_runner.go:124] > # Local socket path to bind the metrics server to
	I0810 22:34:10.607442    4347 command_runner.go:124] > metrics_socket = ""
	I0810 22:34:10.607505    4347 cni.go:93] Creating CNI manager for ""
	I0810 22:34:10.607516    4347 cni.go:154] 2 nodes found, recommending kindnet
	I0810 22:34:10.607526    4347 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0810 22:34:10.607539    4347 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.251 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20210810223223-30291 NodeName:multinode-20210810223223-30291-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.50.251 CgroupDriver:systemd ClientCAF
ile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0810 22:34:10.607651    4347 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.251
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "multinode-20210810223223-30291-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.251
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
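	The generated config pairs podSubnet 10.244.0.0/16 with serviceSubnet 10.96.0.0/12 and a node IP of 192.168.50.251. A small sketch using only the stdlib ipaddress module, with those values copied from the config above, shows the relationships kubeadm depends on:

```python
import ipaddress

pod_subnet = ipaddress.ip_network("10.244.0.0/16")     # podSubnet
service_subnet = ipaddress.ip_network("10.96.0.0/12")  # serviceSubnet
node_ip = ipaddress.ip_address("192.168.50.251")       # node-ip for m02

# Sanity checks: the two CIDRs must not overlap, and node IPs
# must sit outside both ranges.
assert not pod_subnet.overlaps(service_subnet)
assert node_ip not in pod_subnet and node_ip not in service_subnet

# With allocate-node-cidrs: "true", the controller manager hands each
# node a smaller block (commonly a /24) carved from the pod subnet:
first_two = list(pod_subnet.subnets(new_prefix=24))[:2]
print([str(n) for n in first_two])  # ['10.244.0.0/24', '10.244.1.0/24']
```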
	I0810 22:34:10.607723    4347 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --hostname-override=multinode-20210810223223-30291-m02 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.251 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:multinode-20210810223223-30291 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0810 22:34:10.607774    4347 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0810 22:34:10.615456    4347 command_runner.go:124] > kubeadm
	I0810 22:34:10.615473    4347 command_runner.go:124] > kubectl
	I0810 22:34:10.615478    4347 command_runner.go:124] > kubelet
	I0810 22:34:10.615657    4347 binaries.go:44] Found k8s binaries, skipping transfer
	I0810 22:34:10.615722    4347 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0810 22:34:10.622468    4347 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (515 bytes)
	I0810 22:34:10.634378    4347 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0810 22:34:10.646471    4347 ssh_runner.go:149] Run: grep 192.168.50.32	control-plane.minikube.internal$ /etc/hosts
	I0810 22:34:10.650829    4347 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
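	The bash one-liner above makes the /etc/hosts update idempotent: grep -v strips any line already ending in the tab-separated hostname, then the fresh mapping is appended. The same logic in Python, operating on a string rather than /etc/hosts (set_hosts_entry is a hypothetical helper name, not minikube code):

```python
def set_hosts_entry(hosts_text: str, ip: str, name: str) -> str:
    # grep -v $'\t<name>$' -> keep every line NOT already ending in the name
    kept = [line for line in hosts_text.splitlines()
            if not line.endswith("\t" + name)]
    # echo "<ip>\t<name>" -> append the fresh mapping
    kept.append(f"{ip}\t{name}")
    return "\n".join(kept) + "\n"

before = "127.0.0.1\tlocalhost\n192.168.50.1\tcontrol-plane.minikube.internal\n"
after = set_hosts_entry(before, "192.168.50.32", "control-plane.minikube.internal")
print(after.count("control-plane.minikube.internal"))  # 1
```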
	I0810 22:34:10.661317    4347 host.go:66] Checking if "multinode-20210810223223-30291" exists ...
	I0810 22:34:10.661683    4347 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:34:10.661730    4347 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:34:10.673179    4347 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38391
	I0810 22:34:10.673630    4347 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:34:10.674120    4347 main.go:130] libmachine: Using API Version  1
	I0810 22:34:10.674143    4347 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:34:10.674451    4347 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:34:10.674631    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .DriverName
	I0810 22:34:10.674745    4347 start.go:241] JoinCluster: &{Name:multinode-20210810223223-30291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12122/minikube-v1.22.0-1628238775-12122.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-20210810223223-30291 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.32 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true} {Name:m02 IP:192.168.50.251 Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0}
	I0810 22:34:10.674843    4347 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm token create --print-join-command --ttl=0"
	I0810 22:34:10.674867    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHHostname
	I0810 22:34:10.680033    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:34:10.680419    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:89", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:32:38 +0000 UTC Type:0 Mac:52:54:00:ce:d8:89 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:multinode-20210810223223-30291 Clientid:01:52:54:00:ce:d8:89}
	I0810 22:34:10.680440    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined IP address 192.168.50.32 and MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:34:10.680578    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHPort
	I0810 22:34:10.680742    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHKeyPath
	I0810 22:34:10.680874    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHUsername
	I0810 22:34:10.681009    4347 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291/id_rsa Username:docker}
	I0810 22:34:12.440322    4347 command_runner.go:124] > kubeadm join control-plane.minikube.internal:8443 --token gytvgv.ywx6mbvc36673jsk --discovery-token-ca-cert-hash sha256:792de24c5d5a120bf4aa3a25755c9ac1b4ccaeb2dbca2444b5b705903a56bd34 
	I0810 22:34:12.440364    4347 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm token create --print-join-command --ttl=0": (1.765507011s)
	I0810 22:34:12.440402    4347 start.go:262] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.50.251 Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}
	I0810 22:34:12.440501    4347 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm join control-plane.minikube.internal:8443 --token gytvgv.ywx6mbvc36673jsk --discovery-token-ca-cert-hash sha256:792de24c5d5a120bf4aa3a25755c9ac1b4ccaeb2dbca2444b5b705903a56bd34 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-20210810223223-30291-m02"
	I0810 22:34:12.584570    4347 command_runner.go:124] > [preflight] Running pre-flight checks
	I0810 22:34:12.935834    4347 command_runner.go:124] > [preflight] Reading configuration from the cluster...
	I0810 22:34:12.935906    4347 command_runner.go:124] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0810 22:34:12.988878    4347 command_runner.go:124] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0810 22:34:12.989678    4347 command_runner.go:124] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0810 22:34:12.989758    4347 command_runner.go:124] > [kubelet-start] Starting the kubelet
	I0810 22:34:13.147943    4347 command_runner.go:124] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0810 22:34:19.235875    4347 command_runner.go:124] > This node has joined the cluster:
	I0810 22:34:19.235906    4347 command_runner.go:124] > * Certificate signing request was sent to apiserver and a response was received.
	I0810 22:34:19.235916    4347 command_runner.go:124] > * The Kubelet was informed of the new secure connection details.
	I0810 22:34:19.235926    4347 command_runner.go:124] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0810 22:34:19.238240    4347 command_runner.go:124] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0810 22:34:19.238276    4347 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm join control-plane.minikube.internal:8443 --token gytvgv.ywx6mbvc36673jsk --discovery-token-ca-cert-hash sha256:792de24c5d5a120bf4aa3a25755c9ac1b4ccaeb2dbca2444b5b705903a56bd34 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-20210810223223-30291-m02": (6.797755438s)
	I0810 22:34:19.238299    4347 ssh_runner.go:149] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0810 22:34:19.590115    4347 command_runner.go:124] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0810 22:34:19.590518    4347 start.go:243] JoinCluster complete in 8.915767718s
	I0810 22:34:19.590547    4347 cni.go:93] Creating CNI manager for ""
	I0810 22:34:19.590556    4347 cni.go:154] 2 nodes found, recommending kindnet
	I0810 22:34:19.590625    4347 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0810 22:34:19.596465    4347 command_runner.go:124] >   File: /opt/cni/bin/portmap
	I0810 22:34:19.596488    4347 command_runner.go:124] >   Size: 2853400   	Blocks: 5576       IO Block: 4096   regular file
	I0810 22:34:19.596499    4347 command_runner.go:124] > Device: 10h/16d	Inode: 22873       Links: 1
	I0810 22:34:19.596506    4347 command_runner.go:124] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0810 22:34:19.596511    4347 command_runner.go:124] > Access: 2021-08-10 22:32:38.220478056 +0000
	I0810 22:34:19.596517    4347 command_runner.go:124] > Modify: 2021-08-06 09:23:24.000000000 +0000
	I0810 22:34:19.596522    4347 command_runner.go:124] > Change: 2021-08-10 22:32:33.951478056 +0000
	I0810 22:34:19.596526    4347 command_runner.go:124] >  Birth: -
	I0810 22:34:19.596569    4347 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0810 22:34:19.596579    4347 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0810 22:34:19.609756    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0810 22:34:19.905247    4347 command_runner.go:124] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0810 22:34:19.907679    4347 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0810 22:34:19.910391    4347 command_runner.go:124] > serviceaccount/kindnet unchanged
	I0810 22:34:19.923321    4347 command_runner.go:124] > daemonset.apps/kindnet configured
	I0810 22:34:19.926489    4347 start.go:226] Will wait 6m0s for node &{Name:m02 IP:192.168.50.251 Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}
	I0810 22:34:19.928537    4347 out.go:177] * Verifying Kubernetes components...
	I0810 22:34:19.928610    4347 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0810 22:34:19.940193    4347 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0810 22:34:19.940424    4347 kapi.go:59] client config for multinode-20210810223223-30291: &rest.Config{Host:"https://192.168.50.32:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2660), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0810 22:34:19.941599    4347 node_ready.go:35] waiting up to 6m0s for node "multinode-20210810223223-30291-m02" to be "Ready" ...
	I0810 22:34:19.941667    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291-m02
	I0810 22:34:19.941675    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:19.941680    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:19.941687    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:19.944972    4347 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:34:19.944992    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:19.944999    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:19.945005    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:19.945009    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:19.945015    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:19.945020    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:19 GMT
	I0810 22:34:19.945532    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291-m02","uid":"2bb9e107-eff6-4e4c-b421-18b111080a9d","resourceVersion":"555","creationTimestamp":"2021-08-10T22:34:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f: [truncated 5451 chars]
	I0810 22:34:20.446647    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291-m02
	I0810 22:34:20.446674    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:20.446682    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:20.446688    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:20.450862    4347 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0810 22:34:20.450884    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:20.450901    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:20.450905    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:20.450910    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:20.450914    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:20.450919    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:20 GMT
	I0810 22:34:20.451592    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291-m02","uid":"2bb9e107-eff6-4e4c-b421-18b111080a9d","resourceVersion":"555","creationTimestamp":"2021-08-10T22:34:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f: [truncated 5451 chars]
	I0810 22:34:20.946943    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291-m02
	I0810 22:34:20.946971    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:20.946977    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:20.946981    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:20.949838    4347 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:34:20.949861    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:20.949868    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:20.949872    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:20.949877    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:20.949881    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:20 GMT
	I0810 22:34:20.949886    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:20.950372    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291-m02","uid":"2bb9e107-eff6-4e4c-b421-18b111080a9d","resourceVersion":"555","creationTimestamp":"2021-08-10T22:34:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f: [truncated 5451 chars]
	I0810 22:34:21.446883    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291-m02
	I0810 22:34:21.446905    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:21.446912    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:21.446916    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:21.449404    4347 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:34:21.449423    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:21.449429    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:21.449434    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:21.449438    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:21.449443    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:21 GMT
	I0810 22:34:21.449462    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:21.450290    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291-m02","uid":"2bb9e107-eff6-4e4c-b421-18b111080a9d","resourceVersion":"555","creationTimestamp":"2021-08-10T22:34:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f: [truncated 5451 chars]
	I0810 22:34:21.947032    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291-m02
	I0810 22:34:21.947060    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:21.947066    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:21.947070    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:21.951136    4347 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0810 22:34:21.951159    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:21.951166    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:21.951170    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:21.951175    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:21.951186    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:21.951191    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:21 GMT
	I0810 22:34:21.951430    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291-m02","uid":"2bb9e107-eff6-4e4c-b421-18b111080a9d","resourceVersion":"555","creationTimestamp":"2021-08-10T22:34:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f: [truncated 5451 chars]
	I0810 22:34:21.951742    4347 node_ready.go:58] node "multinode-20210810223223-30291-m02" has status "Ready":"False"
	I0810 22:34:22.446092    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291-m02
	I0810 22:34:22.446122    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:22.446128    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:22.446133    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:22.448504    4347 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:34:22.448522    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:22.448527    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:22 GMT
	I0810 22:34:22.448531    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:22.448534    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:22.448536    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:22.448539    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:22.448707    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291-m02","uid":"2bb9e107-eff6-4e4c-b421-18b111080a9d","resourceVersion":"555","creationTimestamp":"2021-08-10T22:34:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f: [truncated 5451 chars]
	I0810 22:34:22.946267    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291-m02
	I0810 22:34:22.946293    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:22.946299    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:22.946304    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:22.949527    4347 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:34:22.949552    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:22.949559    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:22.949564    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:22.949569    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:22.949573    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:22.949606    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:22 GMT
	I0810 22:34:22.950687    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291-m02","uid":"2bb9e107-eff6-4e4c-b421-18b111080a9d","resourceVersion":"555","creationTimestamp":"2021-08-10T22:34:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f: [truncated 5451 chars]
	I0810 22:34:23.446606    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291-m02
	I0810 22:34:23.446694    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:23.446702    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:23.446706    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:23.449234    4347 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:34:23.449256    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:23.449262    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:23.449265    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:23.449268    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:23.449271    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:23.449274    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:23 GMT
	I0810 22:34:23.449411    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291-m02","uid":"2bb9e107-eff6-4e4c-b421-18b111080a9d","resourceVersion":"570","creationTimestamp":"2021-08-10T22:34:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5560 chars]
	I0810 22:34:23.946751    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291-m02
	I0810 22:34:23.946777    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:23.946784    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:23.946788    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:23.950162    4347 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:34:23.950185    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:23.950192    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:23.950197    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:23.950209    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:23.950214    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:23 GMT
	I0810 22:34:23.950218    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:23.950354    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291-m02","uid":"2bb9e107-eff6-4e4c-b421-18b111080a9d","resourceVersion":"570","creationTimestamp":"2021-08-10T22:34:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5560 chars]
	I0810 22:34:24.446334    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291-m02
	I0810 22:34:24.446361    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:24.446366    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:24.446371    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:24.449289    4347 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:34:24.449303    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:24.449308    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:24.449313    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:24.449318    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:24.449322    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:24.449326    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:24 GMT
	I0810 22:34:24.449672    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291-m02","uid":"2bb9e107-eff6-4e4c-b421-18b111080a9d","resourceVersion":"570","creationTimestamp":"2021-08-10T22:34:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5560 chars]
	I0810 22:34:24.449928    4347 node_ready.go:58] node "multinode-20210810223223-30291-m02" has status "Ready":"False"
	I0810 22:34:24.946329    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291-m02
	I0810 22:34:24.946353    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:24.946359    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:24.946363    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:24.949643    4347 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:34:24.949666    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:24.949671    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:24 GMT
	I0810 22:34:24.949675    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:24.949679    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:24.949684    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:24.949690    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:24.949838    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291-m02","uid":"2bb9e107-eff6-4e4c-b421-18b111080a9d","resourceVersion":"570","creationTimestamp":"2021-08-10T22:34:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5560 chars]
	I0810 22:34:25.446488    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291-m02
	I0810 22:34:25.446515    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:25.446529    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:25.446535    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:25.455238    4347 round_trippers.go:457] Response Status: 200 OK in 8 milliseconds
	I0810 22:34:25.455261    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:25.455266    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:25.455270    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:25.455274    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:25.455277    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:25 GMT
	I0810 22:34:25.455280    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:25.455407    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291-m02","uid":"2bb9e107-eff6-4e4c-b421-18b111080a9d","resourceVersion":"570","creationTimestamp":"2021-08-10T22:34:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5560 chars]
	I0810 22:34:25.946504    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291-m02
	I0810 22:34:25.946533    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:25.946541    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:25.946547    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:25.950465    4347 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:34:25.950492    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:25.950496    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:25.950500    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:25.950503    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:25.950505    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:25.950511    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:25 GMT
	I0810 22:34:25.950591    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291-m02","uid":"2bb9e107-eff6-4e4c-b421-18b111080a9d","resourceVersion":"570","creationTimestamp":"2021-08-10T22:34:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5560 chars]
	I0810 22:34:26.446726    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291-m02
	I0810 22:34:26.446753    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:26.446759    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:26.446764    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:26.456593    4347 round_trippers.go:457] Response Status: 200 OK in 9 milliseconds
	I0810 22:34:26.456621    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:26.456627    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:26.456633    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:26.456638    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:26.456642    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:26 GMT
	I0810 22:34:26.456647    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:26.456804    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291-m02","uid":"2bb9e107-eff6-4e4c-b421-18b111080a9d","resourceVersion":"570","creationTimestamp":"2021-08-10T22:34:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5560 chars]
	I0810 22:34:26.457129    4347 node_ready.go:58] node "multinode-20210810223223-30291-m02" has status "Ready":"False"
	I0810 22:34:26.946418    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291-m02
	I0810 22:34:26.946441    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:26.946447    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:26.946451    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:26.949663    4347 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:34:26.949679    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:26.949683    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:26.949686    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:26.949689    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:26 GMT
	I0810 22:34:26.949692    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:26.949695    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:26.949882    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291-m02","uid":"2bb9e107-eff6-4e4c-b421-18b111080a9d","resourceVersion":"570","creationTimestamp":"2021-08-10T22:34:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5560 chars]
	I0810 22:34:27.446543    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291-m02
	I0810 22:34:27.446570    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:27.446576    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:27.446580    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:27.449983    4347 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:34:27.450002    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:27.450008    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:27.450014    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:27.450019    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:27.450023    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:27.450027    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:27 GMT
	I0810 22:34:27.451521    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291-m02","uid":"2bb9e107-eff6-4e4c-b421-18b111080a9d","resourceVersion":"570","creationTimestamp":"2021-08-10T22:34:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5560 chars]
	I0810 22:34:27.946159    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291-m02
	I0810 22:34:27.946186    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:27.946192    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:27.946196    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:27.949443    4347 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:34:27.949465    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:27.949471    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:27.949476    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:27.949479    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:27.949482    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:27.949486    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:27 GMT
	I0810 22:34:27.949575    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291-m02","uid":"2bb9e107-eff6-4e4c-b421-18b111080a9d","resourceVersion":"570","creationTimestamp":"2021-08-10T22:34:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5560 chars]
	I0810 22:34:28.446142    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291-m02
	I0810 22:34:28.446169    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:28.446176    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:28.446180    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:28.450639    4347 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0810 22:34:28.450657    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:28.450663    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:28.450668    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:28.450672    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:28.450677    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:28.450682    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:28 GMT
	I0810 22:34:28.451657    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291-m02","uid":"2bb9e107-eff6-4e4c-b421-18b111080a9d","resourceVersion":"570","creationTimestamp":"2021-08-10T22:34:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5560 chars]
	I0810 22:34:28.946854    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291-m02
	I0810 22:34:28.946878    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:28.946885    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:28.946889    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:28.950372    4347 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:34:28.950384    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:28.950388    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:28.950392    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:28.950396    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:28 GMT
	I0810 22:34:28.950399    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:28.950402    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:28.950500    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291-m02","uid":"2bb9e107-eff6-4e4c-b421-18b111080a9d","resourceVersion":"570","creationTimestamp":"2021-08-10T22:34:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5560 chars]
	I0810 22:34:28.950822    4347 node_ready.go:58] node "multinode-20210810223223-30291-m02" has status "Ready":"False"
	I0810 22:34:29.446578    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291-m02
	I0810 22:34:29.446602    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:29.446608    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:29.446612    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:29.450473    4347 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:34:29.450487    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:29.450493    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:29.450497    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:29.450502    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:29.450514    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:29.450518    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:29 GMT
	I0810 22:34:29.450703    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291-m02","uid":"2bb9e107-eff6-4e4c-b421-18b111080a9d","resourceVersion":"583","creationTimestamp":"2021-08-10T22:34:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metada
ta":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{" [truncated 5733 chars]
	I0810 22:34:29.450984    4347 node_ready.go:49] node "multinode-20210810223223-30291-m02" has status "Ready":"True"
	I0810 22:34:29.451005    4347 node_ready.go:38] duration metric: took 9.509386037s waiting for node "multinode-20210810223223-30291-m02" to be "Ready" ...
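The repeated `GET /api/v1/nodes/...` requests above are minikube's `node_ready` wait loop polling the node object roughly every 500ms until its `Ready` condition flips to `"True"` (here after ~9.5s, at resourceVersion 583). As a rough, self-contained sketch of that condition check — illustrative structs only, not the real `k8s.io/api` types or minikube's actual code:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Minimal structs mirroring only the fields of the v1.Node JSON seen in
// the log above; hypothetical for illustration, not the real API types.
type nodeCondition struct {
	Type   string `json:"type"`
	Status string `json:"status"`
}

type node struct {
	Metadata struct {
		Name string `json:"name"`
	} `json:"metadata"`
	Status struct {
		Conditions []nodeCondition `json:"conditions"`
	} `json:"status"`
}

// isReady reports whether the node's Ready condition is "True" — the
// check the wait loop performs against each polled response body.
func isReady(raw []byte) (bool, error) {
	var n node
	if err := json.Unmarshal(raw, &n); err != nil {
		return false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	// No Ready condition reported yet: treat as not ready.
	return false, nil
}

func main() {
	notReady := []byte(`{"metadata":{"name":"m02"},"status":{"conditions":[{"type":"Ready","status":"False"}]}}`)
	ready := []byte(`{"metadata":{"name":"m02"},"status":{"conditions":[{"type":"Ready","status":"True"}]}}`)
	r1, _ := isReady(notReady)
	r2, _ := isReady(ready)
	fmt.Println(r1, r2) // false true
}
```

While the condition stays `"False"` the loop logs `has status "Ready":"False"` (as at 22:34:24.449928 above) and polls again; the `"True"` result ends the wait and emits the duration metric.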
	I0810 22:34:29.451017    4347 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0810 22:34:29.451103    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods
	I0810 22:34:29.451116    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:29.451123    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:29.451129    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:29.457673    4347 round_trippers.go:457] Response Status: 200 OK in 6 milliseconds
	I0810 22:34:29.457690    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:29.457696    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:29.457700    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:29.457704    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:29 GMT
	I0810 22:34:29.457708    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:29.457712    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:29.461022    4347 request.go:1123] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"584"},"items":[{"metadata":{"name":"coredns-558bd4d5db-v7x6p","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"0c4eb44b-9d97-4934-aa16-8b8625bf04cf","resourceVersion":"493","creationTimestamp":"2021-08-10T22:33:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"28236a2d-7d69-4771-a778-5fae1cd7d05f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:33:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"28236a2d-7d69-4771-a778-5fae1cd7d05f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller"
:{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k: [truncated 66681 chars]
	I0810 22:34:29.463278    4347 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-v7x6p" in "kube-system" namespace to be "Ready" ...
	I0810 22:34:29.463397    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-v7x6p
	I0810 22:34:29.463409    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:29.463416    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:29.463453    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:29.466000    4347 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:34:29.466016    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:29.466022    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:29.466026    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:29.466030    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:29.466035    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:29.466039    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:29 GMT
	I0810 22:34:29.466210    4347 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-v7x6p","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"0c4eb44b-9d97-4934-aa16-8b8625bf04cf","resourceVersion":"493","creationTimestamp":"2021-08-10T22:33:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"28236a2d-7d69-4771-a778-5fae1cd7d05f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:33:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"28236a2d-7d69-4771-a778-5fae1cd7d05f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5733 chars]
	I0810 22:34:29.466491    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291
	I0810 22:34:29.466502    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:29.466507    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:29.466512    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:29.468870    4347 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:34:29.468890    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:29.468896    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:29.468900    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:29.468905    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:29.468909    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:29.468914    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:29 GMT
	I0810 22:34:29.469187    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","resourceVersion":"406","creationTimestamp":"2021-08-10T22:33:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223223-30291","minikube.k8s.io/updated_at":"2021_08_10T22_33_20_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22 [truncated 6553 chars]
	I0810 22:34:29.469392    4347 pod_ready.go:92] pod "coredns-558bd4d5db-v7x6p" in "kube-system" namespace has status "Ready":"True"
	I0810 22:34:29.469404    4347 pod_ready.go:81] duration metric: took 6.097748ms waiting for pod "coredns-558bd4d5db-v7x6p" in "kube-system" namespace to be "Ready" ...
	I0810 22:34:29.469413    4347 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-20210810223223-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:34:29.469452    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210810223223-30291
	I0810 22:34:29.469460    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:29.469464    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:29.469468    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:29.471255    4347 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:34:29.471270    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:29.471276    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:29.471280    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:29.471285    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:29.471289    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:29 GMT
	I0810 22:34:29.471296    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:29.471528    4347 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210810223223-30291","namespace":"kube-system","uid":"8498143e-4386-44bc-9541-3193bd504c1d","resourceVersion":"489","creationTimestamp":"2021-08-10T22:33:25Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.50.32:2379","kubernetes.io/config.hash":"ee4e4232c1192224bf90edfa1030cde5","kubernetes.io/config.mirror":"ee4e4232c1192224bf90edfa1030cde5","kubernetes.io/config.seen":"2021-08-10T22:33:24.968043630Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:33:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 5569 chars]
	I0810 22:34:29.471869    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291
	I0810 22:34:29.471891    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:29.471898    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:29.471903    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:29.473894    4347 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:34:29.473910    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:29.473915    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:29.473919    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:29.473922    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:29 GMT
	I0810 22:34:29.473925    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:29.473927    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:29.474170    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","resourceVersion":"406","creationTimestamp":"2021-08-10T22:33:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223223-30291","minikube.k8s.io/updated_at":"2021_08_10T22_33_20_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22 [truncated 6553 chars]
	I0810 22:34:29.474420    4347 pod_ready.go:92] pod "etcd-multinode-20210810223223-30291" in "kube-system" namespace has status "Ready":"True"
	I0810 22:34:29.474433    4347 pod_ready.go:81] duration metric: took 5.014362ms waiting for pod "etcd-multinode-20210810223223-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:34:29.474444    4347 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-20210810223223-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:34:29.474485    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20210810223223-30291
	I0810 22:34:29.474493    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:29.474497    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:29.474501    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:29.476940    4347 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:34:29.476953    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:29.476957    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:29.476961    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:29.476963    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:29.476967    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:29.476969    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:29 GMT
	I0810 22:34:29.477308    4347 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20210810223223-30291","namespace":"kube-system","uid":"1c83d52d-8a08-42be-9c8a-6420a1bdb75c","resourceVersion":"317","creationTimestamp":"2021-08-10T22:33:17Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.50.32:8443","kubernetes.io/config.hash":"9099813bef5425d688516ac434247f4d","kubernetes.io/config.mirror":"9099813bef5425d688516ac434247f4d","kubernetes.io/config.seen":"2021-08-10T22:33:07.454085484Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:33:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address [truncated 7249 chars]
	I0810 22:34:29.477637    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291
	I0810 22:34:29.477652    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:29.477658    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:29.477664    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:29.479658    4347 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:34:29.479670    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:29.479680    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:29.479687    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:29.479691    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:29.479695    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:29.479698    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:29 GMT
	I0810 22:34:29.480212    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","resourceVersion":"406","creationTimestamp":"2021-08-10T22:33:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223223-30291","minikube.k8s.io/updated_at":"2021_08_10T22_33_20_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22 [truncated 6553 chars]
	I0810 22:34:29.480444    4347 pod_ready.go:92] pod "kube-apiserver-multinode-20210810223223-30291" in "kube-system" namespace has status "Ready":"True"
	I0810 22:34:29.480457    4347 pod_ready.go:81] duration metric: took 6.006922ms waiting for pod "kube-apiserver-multinode-20210810223223-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:34:29.480468    4347 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-20210810223223-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:34:29.480512    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210810223223-30291
	I0810 22:34:29.480524    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:29.480533    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:29.480544    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:29.483629    4347 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:34:29.483645    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:29.483650    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:29.483655    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:29.483660    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:29.483664    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:29.483668    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:29 GMT
	I0810 22:34:29.484145    4347 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210810223223-30291","namespace":"kube-system","uid":"9305e895-2f70-44a4-8319-6f50b7e7a0ce","resourceVersion":"456","creationTimestamp":"2021-08-10T22:33:25Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"77761625d867cf54e5130d9def04b55c","kubernetes.io/config.mirror":"77761625d867cf54e5130d9def04b55c","kubernetes.io/config.seen":"2021-08-10T22:33:24.968061293Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:33:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi
g.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.s [truncated 6810 chars]
	I0810 22:34:29.484456    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291
	I0810 22:34:29.484472    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:29.484477    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:29.484481    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:29.486294    4347 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:34:29.486307    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:29.486313    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:29.486318    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:29.486322    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:29.486327    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:29.486332    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:29 GMT
	I0810 22:34:29.486566    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","resourceVersion":"406","creationTimestamp":"2021-08-10T22:33:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223223-30291","minikube.k8s.io/updated_at":"2021_08_10T22_33_20_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22 [truncated 6553 chars]
	I0810 22:34:29.486830    4347 pod_ready.go:92] pod "kube-controller-manager-multinode-20210810223223-30291" in "kube-system" namespace has status "Ready":"True"
	I0810 22:34:29.486845    4347 pod_ready.go:81] duration metric: took 6.367836ms waiting for pod "kube-controller-manager-multinode-20210810223223-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:34:29.486853    4347 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6t6mb" in "kube-system" namespace to be "Ready" ...
	I0810 22:34:29.647221    4347 request.go:600] Waited for 160.30517ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6t6mb
	I0810 22:34:29.647293    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6t6mb
	I0810 22:34:29.647309    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:29.647317    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:29.647324    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:29.650249    4347 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:34:29.650265    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:29.650272    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:29.650276    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:29.650281    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:29 GMT
	I0810 22:34:29.650287    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:29.650291    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:29.650445    4347 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6t6mb","generateName":"kube-proxy-","namespace":"kube-system","uid":"22159811-3bd2-4e80-94b1-f3bef037909c","resourceVersion":"565","creationTimestamp":"2021-08-10T22:34:19Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"3eb0224f-214a-4d5e-ba63-b7b722448d21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3eb0224f-214a-4d5e-ba63-b7b722448d21\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller
":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5770 chars]
	I0810 22:34:29.847176    4347 request.go:600] Waited for 196.356548ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291-m02
	I0810 22:34:29.847236    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291-m02
	I0810 22:34:29.847241    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:29.847247    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:29.847251    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:29.851548    4347 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0810 22:34:29.851564    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:29.851569    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:29.851574    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:29.851578    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:29.851582    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:29.851587    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:29 GMT
	I0810 22:34:29.851741    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291-m02","uid":"2bb9e107-eff6-4e4c-b421-18b111080a9d","resourceVersion":"583","creationTimestamp":"2021-08-10T22:34:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metada
ta":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{" [truncated 5733 chars]
	I0810 22:34:29.851998    4347 pod_ready.go:92] pod "kube-proxy-6t6mb" in "kube-system" namespace has status "Ready":"True"
	I0810 22:34:29.852014    4347 pod_ready.go:81] duration metric: took 365.151653ms waiting for pod "kube-proxy-6t6mb" in "kube-system" namespace to be "Ready" ...
	I0810 22:34:29.852026    4347 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lmhw9" in "kube-system" namespace to be "Ready" ...
	I0810 22:34:30.047441    4347 request.go:600] Waited for 195.34571ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lmhw9
	I0810 22:34:30.047512    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lmhw9
	I0810 22:34:30.047517    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:30.047522    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:30.047526    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:30.051037    4347 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:34:30.051058    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:30.051066    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:30.051074    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:30.051078    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:30.051083    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:30.051087    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:30 GMT
	I0810 22:34:30.051362    4347 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lmhw9","generateName":"kube-proxy-","namespace":"kube-system","uid":"2a10306d-93c9-4aac-b47a-8bd1d406882c","resourceVersion":"470","creationTimestamp":"2021-08-10T22:33:33Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"3eb0224f-214a-4d5e-ba63-b7b722448d21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:33:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3eb0224f-214a-4d5e-ba63-b7b722448d21\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller
":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5758 chars]
	I0810 22:34:30.247067    4347 request.go:600] Waited for 195.352918ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291
	I0810 22:34:30.247139    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291
	I0810 22:34:30.247146    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:30.247154    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:30.247159    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:30.250034    4347 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:34:30.250054    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:30.250059    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:30 GMT
	I0810 22:34:30.250062    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:30.250065    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:30.250068    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:30.250071    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:30.250329    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","resourceVersion":"406","creationTimestamp":"2021-08-10T22:33:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223223-30291","minikube.k8s.io/updated_at":"2021_08_10T22_33_20_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22 [truncated 6553 chars]
	I0810 22:34:30.250647    4347 pod_ready.go:92] pod "kube-proxy-lmhw9" in "kube-system" namespace has status "Ready":"True"
	I0810 22:34:30.250658    4347 pod_ready.go:81] duration metric: took 398.625584ms waiting for pod "kube-proxy-lmhw9" in "kube-system" namespace to be "Ready" ...
	I0810 22:34:30.250667    4347 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-20210810223223-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:34:30.447129    4347 request.go:600] Waited for 196.375462ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210810223223-30291
	I0810 22:34:30.447205    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210810223223-30291
	I0810 22:34:30.447213    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:30.447228    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:30.447241    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:30.450665    4347 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:34:30.450685    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:30.450690    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:30.450693    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:30.450696    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:30 GMT
	I0810 22:34:30.450699    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:30.450703    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:30.451886    4347 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20210810223223-30291","namespace":"kube-system","uid":"5a7e6aa0-3e54-4877-a2b0-79df1e84d9f7","resourceVersion":"295","creationTimestamp":"2021-08-10T22:33:25Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"18b06801ebf2048d768b73e098da8a40","kubernetes.io/config.mirror":"18b06801ebf2048d768b73e098da8a40","kubernetes.io/config.seen":"2021-08-10T22:33:24.968063579Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:33:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:ku
bernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labe [truncated 4540 chars]
	I0810 22:34:30.647608    4347 request.go:600] Waited for 195.351688ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291
	I0810 22:34:30.647669    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291
	I0810 22:34:30.647676    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:30.647683    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:30.647691    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:30.696789    4347 round_trippers.go:457] Response Status: 200 OK in 49 milliseconds
	I0810 22:34:30.696818    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:30.696825    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:30.696828    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:30 GMT
	I0810 22:34:30.696832    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:30.696835    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:30.696838    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:30.697222    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","resourceVersion":"406","creationTimestamp":"2021-08-10T22:33:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223223-30291","minikube.k8s.io/updated_at":"2021_08_10T22_33_20_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22 [truncated 6553 chars]
	I0810 22:34:30.697665    4347 pod_ready.go:92] pod "kube-scheduler-multinode-20210810223223-30291" in "kube-system" namespace has status "Ready":"True"
	I0810 22:34:30.697686    4347 pod_ready.go:81] duration metric: took 447.011787ms waiting for pod "kube-scheduler-multinode-20210810223223-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:34:30.697702    4347 pod_ready.go:38] duration metric: took 1.246665582s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0810 22:34:30.697735    4347 system_svc.go:44] waiting for kubelet service to be running ....
	I0810 22:34:30.697793    4347 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0810 22:34:30.709069    4347 system_svc.go:56] duration metric: took 11.32472ms WaitForService to wait for kubelet.
	I0810 22:34:30.709095    4347 kubeadm.go:547] duration metric: took 10.78256952s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0810 22:34:30.709123    4347 node_conditions.go:102] verifying NodePressure condition ...
	I0810 22:34:30.847492    4347 request.go:600] Waited for 138.293842ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.50.32:8443/api/v1/nodes
	I0810 22:34:30.847563    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes
	I0810 22:34:30.847598    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:30.847615    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:30.847626    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:30.850811    4347 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:34:30.850837    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:30.850845    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:30.850851    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:30.850855    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:30 GMT
	I0810 22:34:30.850859    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:30.850865    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:30.851288    4347 request.go:1123] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"585"},"items":[{"metadata":{"name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","resourceVersion":"406","creationTimestamp":"2021-08-10T22:33:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223223-30291","minikube.k8s.io/updated_at":"2021_08_10T22_33_20_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed
-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operatio [truncated 13331 chars]
	I0810 22:34:30.851857    4347 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0810 22:34:30.851878    4347 node_conditions.go:123] node cpu capacity is 2
	I0810 22:34:30.851895    4347 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0810 22:34:30.851902    4347 node_conditions.go:123] node cpu capacity is 2
	I0810 22:34:30.851908    4347 node_conditions.go:105] duration metric: took 142.779874ms to run NodePressure ...
	I0810 22:34:30.851926    4347 start.go:231] waiting for startup goroutines ...
	I0810 22:34:30.894335    4347 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0810 22:34:30.896622    4347 out.go:177] * Done! kubectl is now configured to use "multinode-20210810223223-30291" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Tue 2021-08-10 22:32:34 UTC, end at Tue 2021-08-10 22:37:41 UTC. --
	Aug 10 22:37:40 multinode-20210810223223-30291 crio[2074]: time="2021-08-10 22:37:40.850733690Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=383722cb-87a0-4197-97b0-867f21945238 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:37:40 multinode-20210810223223-30291 crio[2074]: time="2021-08-10 22:37:40.850938118Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0546f83cfad21f66789e8c6c45beb48deb7e73bfb0f09ffb76be32add922c472,PodSandboxId:77754eaffbc8999d0eee55167b99849548aeceee68e42223821dde6df9b71f5f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,State:CONTAINER_RUNNING,CreatedAt:1628634877804702101,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-84b6686758-5h7gq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d11e1a39-be6d-4d16-9086-b6cfef5e1644,},Annotations:map[string]string{io.kubernetes.container.hash: c6d8bc46,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f90af0c728851e601b452c99af417f5bfcd580b5355c6e8bb56fa2c701aacb7,PodSandboxId:f2a2f0197075a11d02804fe88085db4f127993180baa17934f4f244fbc5c7a8c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1,State:CONTAINER_RUNNING,CreatedAt:1628634817927932495,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2bvdc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c26b9021-1d86-475c-ac98-6f7e7e07c434,},Annotations:map[string]string{io.kubernetes.container.hash: 4970c570,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfebebba90bf30cb1d511f33375a3e9b6bd91ddc07cabfe5b5589a14fe35685c,PodSandboxId:552c60b72eff7ec71d413674f557175d18290855b2fbb557210d2ac15208a685,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628634817667541294,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af946d1d-fa19-47fa-8c83-fd1d06a0e788,},Annotations:map[string]string{io.kubernetes.container.hash: 31e4f950,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea6ca1aa48dc2cf026c5ed8947bfb8ef90ac7ac3d0f88bee1bc2b22511e83b4,PodSandboxId:fbff0e39503c70d012b363b8fe670ce8a9f3577e7e15e998600a2fa8b6670222,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628634816892860051,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-v7x6p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c4eb44b-9d97-4934-aa16-8b8625bf04cf,},Annotations:map[string]string{io.kubernetes.container.hash: e2b5faf8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dn
s-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:450a88f25c78b26558e59b4e708a2af565fa95ffdc04a9cb0e91b46224c6573d,PodSandboxId:2b47855215e9f9a81ed9a31f5e73478cd23463ae8391753d3b15909daf03e2bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628634815170944830,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lmhw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a10306d-93c9-4aac-b47a-8bd1d
406882c,},Annotations:map[string]string{io.kubernetes.container.hash: a3c465e1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b071c54f171f451e6e33cdbfbddab63c53133700bfe5dbff374f9d207f00facf,PodSandboxId:8370bc0b5b9a970d4007204b3b67a7266d452619c4263328f4a8403fa748ea90,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628634790862838159,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18b06801ebf2048d768b73e098da
8a40,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d8b11c78d38735ea0d95a1ef9a0812af3d2ecb679a807514125823d90f0f456,PodSandboxId:76613b5ae865f77eb677f7fc6c1d96c89c385010f19742b2a2daaf3b3d536da2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628634790776364691,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 77761625d867cf54e5130d9def04b55c,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b22cef1088cdfa5b9c5d2f6974fa60c0edf2132c0881c9c61984c16c2864f97,PodSandboxId:2d4522aed7a375e49e9a8b17d6da385d86a65fcbac37cb6f75c688d6842598cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628634790563156773,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
099813bef5425d688516ac434247f4d,},Annotations:map[string]string{io.kubernetes.container.hash: 96b9c909,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d359668a0c2391fdfb2cd9dcf67ffaf1c30ce8549c23d047c95ef4a75f2ada,PodSandboxId:42a0fc97ea8eb49918d4f19e7d3b3837f13d12aa3c1b7f8c191d955c9ae417e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628634790483989854,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee4e4232c1192224bf90edfa1030cde5,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 755f249c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=383722cb-87a0-4197-97b0-867f21945238 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:37:40 multinode-20210810223223-30291 crio[2074]: time="2021-08-10 22:37:40.964843802Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7e265bdd-92f2-421a-b0e1-b1349e554aee name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:37:40 multinode-20210810223223-30291 crio[2074]: time="2021-08-10 22:37:40.964982368Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7e265bdd-92f2-421a-b0e1-b1349e554aee name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:37:40 multinode-20210810223223-30291 crio[2074]: time="2021-08-10 22:37:40.965196064Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0546f83cfad21f66789e8c6c45beb48deb7e73bfb0f09ffb76be32add922c472,PodSandboxId:77754eaffbc8999d0eee55167b99849548aeceee68e42223821dde6df9b71f5f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,State:CONTAINER_RUNNING,CreatedAt:1628634877804702101,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-84b6686758-5h7gq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d11e1a39-be6d-4d16-9086-b6cfef5e1644,},Annotations:map[string]string{io.kubernetes.container.hash: c6d8bc46,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f90af0c728851e601b452c99af417f5bfcd580b5355c6e8bb56fa2c701aacb7,PodSandboxId:f2a2f0197075a11d02804fe88085db4f127993180baa17934f4f244fbc5c7a8c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1,State:CONTAINER_RUNNING,CreatedAt:1628634817927932495,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2bvdc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c26b9021-1d86-475c-ac98-6f7e7e07c434,},Annotations:map[string]string{io.kubernetes.container.hash: 4970c570,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfebebba90bf30cb1d511f33375a3e9b6bd91ddc07cabfe5b5589a14fe35685c,PodSandboxId:552c60b72eff7ec71d413674f557175d18290855b2fbb557210d2ac15208a685,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628634817667541294,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af946d1d-fa19-47fa-8c83-fd1d06a0e788,},Annotations:map[string]string{io.kubernetes.container.hash: 31e4f950,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea6ca1aa48dc2cf026c5ed8947bfb8ef90ac7ac3d0f88bee1bc2b22511e83b4,PodSandboxId:fbff0e39503c70d012b363b8fe670ce8a9f3577e7e15e998600a2fa8b6670222,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628634816892860051,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-v7x6p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c4eb44b-9d97-4934-aa16-8b8625bf04cf,},Annotations:map[string]string{io.kubernetes.container.hash: e2b5faf8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dn
s-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:450a88f25c78b26558e59b4e708a2af565fa95ffdc04a9cb0e91b46224c6573d,PodSandboxId:2b47855215e9f9a81ed9a31f5e73478cd23463ae8391753d3b15909daf03e2bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628634815170944830,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lmhw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a10306d-93c9-4aac-b47a-8bd1d
406882c,},Annotations:map[string]string{io.kubernetes.container.hash: a3c465e1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b071c54f171f451e6e33cdbfbddab63c53133700bfe5dbff374f9d207f00facf,PodSandboxId:8370bc0b5b9a970d4007204b3b67a7266d452619c4263328f4a8403fa748ea90,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628634790862838159,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18b06801ebf2048d768b73e098da
8a40,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d8b11c78d38735ea0d95a1ef9a0812af3d2ecb679a807514125823d90f0f456,PodSandboxId:76613b5ae865f77eb677f7fc6c1d96c89c385010f19742b2a2daaf3b3d536da2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628634790776364691,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 77761625d867cf54e5130d9def04b55c,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b22cef1088cdfa5b9c5d2f6974fa60c0edf2132c0881c9c61984c16c2864f97,PodSandboxId:2d4522aed7a375e49e9a8b17d6da385d86a65fcbac37cb6f75c688d6842598cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628634790563156773,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
099813bef5425d688516ac434247f4d,},Annotations:map[string]string{io.kubernetes.container.hash: 96b9c909,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d359668a0c2391fdfb2cd9dcf67ffaf1c30ce8549c23d047c95ef4a75f2ada,PodSandboxId:42a0fc97ea8eb49918d4f19e7d3b3837f13d12aa3c1b7f8c191d955c9ae417e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628634790483989854,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee4e4232c1192224bf90edfa1030cde5,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 755f249c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7e265bdd-92f2-421a-b0e1-b1349e554aee name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:37:41 multinode-20210810223223-30291 crio[2074]: time="2021-08-10 22:37:41.008569637Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=97da63bb-ac3f-46d0-8384-bf02e5b03f99 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:37:41 multinode-20210810223223-30291 crio[2074]: time="2021-08-10 22:37:41.008846324Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=97da63bb-ac3f-46d0-8384-bf02e5b03f99 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:37:41 multinode-20210810223223-30291 crio[2074]: time="2021-08-10 22:37:41.009082236Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0546f83cfad21f66789e8c6c45beb48deb7e73bfb0f09ffb76be32add922c472,PodSandboxId:77754eaffbc8999d0eee55167b99849548aeceee68e42223821dde6df9b71f5f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,State:CONTAINER_RUNNING,CreatedAt:1628634877804702101,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-84b6686758-5h7gq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d11e1a39-be6d-4d16-9086-b6cfef5e1644,},Annotations:map[string]string{io.kubernetes.container.hash: c6d8bc46,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f90af0c728851e601b452c99af417f5bfcd580b5355c6e8bb56fa2c701aacb7,PodSandboxId:f2a2f0197075a11d02804fe88085db4f127993180baa17934f4f244fbc5c7a8c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1,State:CONTAINER_RUNNING,CreatedAt:1628634817927932495,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2bvdc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c26b9021-1d86-475c-ac98-6f7e7e07c434,},Annotations:map[string]string{io.kubernetes.container.hash: 4970c570,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfebebba90bf30cb1d511f33375a3e9b6bd91ddc07cabfe5b5589a14fe35685c,PodSandboxId:552c60b72eff7ec71d413674f557175d18290855b2fbb557210d2ac15208a685,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628634817667541294,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af946d1d-fa19-47fa-8c83-fd1d06a0e788,},Annotations:map[string]string{io.kubernetes.container.hash: 31e4f950,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea6ca1aa48dc2cf026c5ed8947bfb8ef90ac7ac3d0f88bee1bc2b22511e83b4,PodSandboxId:fbff0e39503c70d012b363b8fe670ce8a9f3577e7e15e998600a2fa8b6670222,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628634816892860051,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-v7x6p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c4eb44b-9d97-4934-aa16-8b8625bf04cf,},Annotations:map[string]string{io.kubernetes.container.hash: e2b5faf8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dn
s-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:450a88f25c78b26558e59b4e708a2af565fa95ffdc04a9cb0e91b46224c6573d,PodSandboxId:2b47855215e9f9a81ed9a31f5e73478cd23463ae8391753d3b15909daf03e2bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628634815170944830,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lmhw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a10306d-93c9-4aac-b47a-8bd1d
406882c,},Annotations:map[string]string{io.kubernetes.container.hash: a3c465e1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b071c54f171f451e6e33cdbfbddab63c53133700bfe5dbff374f9d207f00facf,PodSandboxId:8370bc0b5b9a970d4007204b3b67a7266d452619c4263328f4a8403fa748ea90,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628634790862838159,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18b06801ebf2048d768b73e098da
8a40,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d8b11c78d38735ea0d95a1ef9a0812af3d2ecb679a807514125823d90f0f456,PodSandboxId:76613b5ae865f77eb677f7fc6c1d96c89c385010f19742b2a2daaf3b3d536da2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628634790776364691,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 77761625d867cf54e5130d9def04b55c,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b22cef1088cdfa5b9c5d2f6974fa60c0edf2132c0881c9c61984c16c2864f97,PodSandboxId:2d4522aed7a375e49e9a8b17d6da385d86a65fcbac37cb6f75c688d6842598cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628634790563156773,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
099813bef5425d688516ac434247f4d,},Annotations:map[string]string{io.kubernetes.container.hash: 96b9c909,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d359668a0c2391fdfb2cd9dcf67ffaf1c30ce8549c23d047c95ef4a75f2ada,PodSandboxId:42a0fc97ea8eb49918d4f19e7d3b3837f13d12aa3c1b7f8c191d955c9ae417e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628634790483989854,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee4e4232c1192224bf90edfa1030cde5,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 755f249c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=97da63bb-ac3f-46d0-8384-bf02e5b03f99 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:37:41 multinode-20210810223223-30291 crio[2074]: time="2021-08-10 22:37:41.032829770Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=adc18b82-b35b-4fa2-a099-75908fafcae2 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Aug 10 22:37:41 multinode-20210810223223-30291 crio[2074]: time="2021-08-10 22:37:41.034113294Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:77754eaffbc8999d0eee55167b99849548aeceee68e42223821dde6df9b71f5f,Metadata:&PodSandboxMetadata{Name:busybox-84b6686758-5h7gq,Uid:d11e1a39-be6d-4d16-9086-b6cfef5e1644,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1628634873698805971,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-84b6686758-5h7gq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d11e1a39-be6d-4d16-9086-b6cfef5e1644,pod-template-hash: 84b6686758,},Annotations:map[string]string{kubernetes.io/config.seen: 2021-08-10T22:34:31.844571684Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:552c60b72eff7ec71d413674f557175d18290855b2fbb557210d2ac15208a685,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:af946d1d-fa19-47fa-8c83-fd1d06a0e788,Namespace:kube-system
,Attempt:0,},State:SANDBOX_READY,CreatedAt:1628634816888082643,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af946d1d-fa19-47fa-8c83-fd1d06a0e788,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\
":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2021-08-10T22:33:36.208400838Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fbff0e39503c70d012b363b8fe670ce8a9f3577e7e15e998600a2fa8b6670222,Metadata:&PodSandboxMetadata{Name:coredns-558bd4d5db-v7x6p,Uid:0c4eb44b-9d97-4934-aa16-8b8625bf04cf,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1628634815923185302,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-558bd4d5db-v7x6p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c4eb44b-9d97-4934-aa16-8b8625bf04cf,k8s-app: kube-dns,pod-template-hash: 558bd4d5db,},Annotations:map[string]string{kubernetes.io/config.seen: 2021-08-10T22:33:34.035055859Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2b47855215e9f9a81ed9a31f5e73478cd23463ae8391753d3b15909daf03e2bf,Metadata:&PodSandboxMetadata{Name:kube-proxy-lmhw9,Uid:2a10306d-93c9-4aac-b47a-8bd1d406882c,Namespace
:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1628634814301230961,Labels:map[string]string{controller-revision-hash: 7cdcb64568,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-lmhw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a10306d-93c9-4aac-b47a-8bd1d406882c,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2021-08-10T22:33:33.876387957Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f2a2f0197075a11d02804fe88085db4f127993180baa17934f4f244fbc5c7a8c,Metadata:&PodSandboxMetadata{Name:kindnet-2bvdc,Uid:c26b9021-1d86-475c-ac98-6f7e7e07c434,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1628634814261575992,Labels:map[string]string{app: kindnet,controller-revision-hash: 694b6fb659,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-2bvdc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c26b9021-1d86-475c-ac98-6f7e7e07c434,k8s-app: kindnet,pod
-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2021-08-10T22:33:33.859000126Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:42a0fc97ea8eb49918d4f19e7d3b3837f13d12aa3c1b7f8c191d955c9ae417e3,Metadata:&PodSandboxMetadata{Name:etcd-multinode-20210810223223-30291,Uid:ee4e4232c1192224bf90edfa1030cde5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1628634789523165030,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee4e4232c1192224bf90edfa1030cde5,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.32:2379,kubernetes.io/config.hash: ee4e4232c1192224bf90edfa1030cde5,kubernetes.io/config.seen: 2021-08-10T22:33:07.454065362Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2d4522aed7a375e49e9a8b17d6d
a385d86a65fcbac37cb6f75c688d6842598cd,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-20210810223223-30291,Uid:9099813bef5425d688516ac434247f4d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1628634789519001825,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9099813bef5425d688516ac434247f4d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.32:8443,kubernetes.io/config.hash: 9099813bef5425d688516ac434247f4d,kubernetes.io/config.seen: 2021-08-10T22:33:07.454085484Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8370bc0b5b9a970d4007204b3b67a7266d452619c4263328f4a8403fa748ea90,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-20210810223223-30291,Uid:18b06801ebf2048d768b73e098da8a40,Namespace:kube-system,Attemp
t:0,},State:SANDBOX_READY,CreatedAt:1628634789502775214,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18b06801ebf2048d768b73e098da8a40,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 18b06801ebf2048d768b73e098da8a40,kubernetes.io/config.seen: 2021-08-10T22:33:07.454089904Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:76613b5ae865f77eb677f7fc6c1d96c89c385010f19742b2a2daaf3b3d536da2,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-20210810223223-30291,Uid:77761625d867cf54e5130d9def04b55c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1628634789482979257,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-20210810223223-30291,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 77761625d867cf54e5130d9def04b55c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 77761625d867cf54e5130d9def04b55c,kubernetes.io/config.seen: 2021-08-10T22:33:07.454087868Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=adc18b82-b35b-4fa2-a099-75908fafcae2 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Aug 10 22:37:41 multinode-20210810223223-30291 crio[2074]: time="2021-08-10 22:37:41.035851379Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=26e66b87-386f-4034-9f08-df0972e1b34b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:37:41 multinode-20210810223223-30291 crio[2074]: time="2021-08-10 22:37:41.035977792Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=26e66b87-386f-4034-9f08-df0972e1b34b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:37:41 multinode-20210810223223-30291 crio[2074]: time="2021-08-10 22:37:41.036148109Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0546f83cfad21f66789e8c6c45beb48deb7e73bfb0f09ffb76be32add922c472,PodSandboxId:77754eaffbc8999d0eee55167b99849548aeceee68e42223821dde6df9b71f5f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,State:CONTAINER_RUNNING,CreatedAt:1628634877804702101,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-84b6686758-5h7gq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d11e1a39-be6d-4d16-9086-b6cfef5e1644,},Annotations:map[string]string{io.kubernetes.container.hash: c6d8bc46,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f90af0c728851e601b452c99af417f5bfcd580b5355c6e8bb56fa2c701aacb7,PodSandboxId:f2a2f0197075a11d02804fe88085db4f127993180baa17934f4f244fbc5c7a8c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1,State:CONTAINER_RUNNING,CreatedAt:1628634817927932495,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2bvdc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c26b9021-1d86-475c-ac98-6f7e7e07c434,},Annotations:map[string]string{io.kubernetes.container.hash: 4970c570,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfebebba90bf30cb1d511f33375a3e9b6bd91ddc07cabfe5b5589a14fe35685c,PodSandboxId:552c60b72eff7ec71d413674f557175d18290855b2fbb557210d2ac15208a685,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628634817667541294,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af946d1d-fa19-47fa-8c83-fd1d06a0e788,},Annotations:map[string]string{io.kubernetes.container.hash: 31e4f950,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea6ca1aa48dc2cf026c5ed8947bfb8ef90ac7ac3d0f88bee1bc2b22511e83b4,PodSandboxId:fbff0e39503c70d012b363b8fe670ce8a9f3577e7e15e998600a2fa8b6670222,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628634816892860051,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-v7x6p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c4eb44b-9d97-4934-aa16-8b8625bf04cf,},Annotations:map[string]string{io.kubernetes.container.hash: e2b5faf8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dn
s-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:450a88f25c78b26558e59b4e708a2af565fa95ffdc04a9cb0e91b46224c6573d,PodSandboxId:2b47855215e9f9a81ed9a31f5e73478cd23463ae8391753d3b15909daf03e2bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628634815170944830,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lmhw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a10306d-93c9-4aac-b47a-8bd1d
406882c,},Annotations:map[string]string{io.kubernetes.container.hash: a3c465e1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b071c54f171f451e6e33cdbfbddab63c53133700bfe5dbff374f9d207f00facf,PodSandboxId:8370bc0b5b9a970d4007204b3b67a7266d452619c4263328f4a8403fa748ea90,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628634790862838159,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18b06801ebf2048d768b73e098da
8a40,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d8b11c78d38735ea0d95a1ef9a0812af3d2ecb679a807514125823d90f0f456,PodSandboxId:76613b5ae865f77eb677f7fc6c1d96c89c385010f19742b2a2daaf3b3d536da2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628634790776364691,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 77761625d867cf54e5130d9def04b55c,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b22cef1088cdfa5b9c5d2f6974fa60c0edf2132c0881c9c61984c16c2864f97,PodSandboxId:2d4522aed7a375e49e9a8b17d6da385d86a65fcbac37cb6f75c688d6842598cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628634790563156773,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
099813bef5425d688516ac434247f4d,},Annotations:map[string]string{io.kubernetes.container.hash: 96b9c909,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d359668a0c2391fdfb2cd9dcf67ffaf1c30ce8549c23d047c95ef4a75f2ada,PodSandboxId:42a0fc97ea8eb49918d4f19e7d3b3837f13d12aa3c1b7f8c191d955c9ae417e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628634790483989854,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee4e4232c1192224bf90edfa1030cde5,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 755f249c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=26e66b87-386f-4034-9f08-df0972e1b34b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:37:41 multinode-20210810223223-30291 crio[2074]: time="2021-08-10 22:37:41.058644541Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f69dede4-f905-486b-aebb-2e1f93d57d53 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:37:41 multinode-20210810223223-30291 crio[2074]: time="2021-08-10 22:37:41.058741279Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f69dede4-f905-486b-aebb-2e1f93d57d53 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:37:41 multinode-20210810223223-30291 crio[2074]: time="2021-08-10 22:37:41.059212896Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0546f83cfad21f66789e8c6c45beb48deb7e73bfb0f09ffb76be32add922c472,PodSandboxId:77754eaffbc8999d0eee55167b99849548aeceee68e42223821dde6df9b71f5f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,State:CONTAINER_RUNNING,CreatedAt:1628634877804702101,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-84b6686758-5h7gq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d11e1a39-be6d-4d16-9086-b6cfef5e1644,},Annotations:map[string]string{io.kubernetes.container.hash: c6d8bc46,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f90af0c728851e601b452c99af417f5bfcd580b5355c6e8bb56fa2c701aacb7,PodSandboxId:f2a2f0197075a11d02804fe88085db4f127993180baa17934f4f244fbc5c7a8c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1,State:CONTAINER_RUNNING,CreatedAt:1628634817927932495,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2bvdc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c26b9021-1d86-475c-ac98-6f7e7e07c434,},Annotations:map[string]string{io.kubernetes.container.hash: 4970c570,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfebebba90bf30cb1d511f33375a3e9b6bd91ddc07cabfe5b5589a14fe35685c,PodSandboxId:552c60b72eff7ec71d413674f557175d18290855b2fbb557210d2ac15208a685,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628634817667541294,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af946d1d-fa19-47fa-8c83-fd1d06a0e788,},Annotations:map[string]string{io.kubernetes.container.hash: 31e4f950,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea6ca1aa48dc2cf026c5ed8947bfb8ef90ac7ac3d0f88bee1bc2b22511e83b4,PodSandboxId:fbff0e39503c70d012b363b8fe670ce8a9f3577e7e15e998600a2fa8b6670222,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628634816892860051,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-v7x6p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c4eb44b-9d97-4934-aa16-8b8625bf04cf,},Annotations:map[string]string{io.kubernetes.container.hash: e2b5faf8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dn
s-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:450a88f25c78b26558e59b4e708a2af565fa95ffdc04a9cb0e91b46224c6573d,PodSandboxId:2b47855215e9f9a81ed9a31f5e73478cd23463ae8391753d3b15909daf03e2bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628634815170944830,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lmhw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a10306d-93c9-4aac-b47a-8bd1d
406882c,},Annotations:map[string]string{io.kubernetes.container.hash: a3c465e1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b071c54f171f451e6e33cdbfbddab63c53133700bfe5dbff374f9d207f00facf,PodSandboxId:8370bc0b5b9a970d4007204b3b67a7266d452619c4263328f4a8403fa748ea90,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628634790862838159,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18b06801ebf2048d768b73e098da
8a40,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d8b11c78d38735ea0d95a1ef9a0812af3d2ecb679a807514125823d90f0f456,PodSandboxId:76613b5ae865f77eb677f7fc6c1d96c89c385010f19742b2a2daaf3b3d536da2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628634790776364691,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 77761625d867cf54e5130d9def04b55c,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b22cef1088cdfa5b9c5d2f6974fa60c0edf2132c0881c9c61984c16c2864f97,PodSandboxId:2d4522aed7a375e49e9a8b17d6da385d86a65fcbac37cb6f75c688d6842598cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628634790563156773,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
099813bef5425d688516ac434247f4d,},Annotations:map[string]string{io.kubernetes.container.hash: 96b9c909,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d359668a0c2391fdfb2cd9dcf67ffaf1c30ce8549c23d047c95ef4a75f2ada,PodSandboxId:42a0fc97ea8eb49918d4f19e7d3b3837f13d12aa3c1b7f8c191d955c9ae417e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628634790483989854,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee4e4232c1192224bf90edfa1030cde5,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 755f249c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f69dede4-f905-486b-aebb-2e1f93d57d53 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:37:41 multinode-20210810223223-30291 crio[2074]: time="2021-08-10 22:37:41.099170732Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=68ece088-5764-440c-b16f-1c5f298db714 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:37:41 multinode-20210810223223-30291 crio[2074]: time="2021-08-10 22:37:41.099221861Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=68ece088-5764-440c-b16f-1c5f298db714 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:37:41 multinode-20210810223223-30291 crio[2074]: time="2021-08-10 22:37:41.099404317Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0546f83cfad21f66789e8c6c45beb48deb7e73bfb0f09ffb76be32add922c472,PodSandboxId:77754eaffbc8999d0eee55167b99849548aeceee68e42223821dde6df9b71f5f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,State:CONTAINER_RUNNING,CreatedAt:1628634877804702101,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-84b6686758-5h7gq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d11e1a39-be6d-4d16-9086-b6cfef5e1644,},Annotations:map[string]string{io.kubernetes.container.hash: c6d8bc46,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f90af0c728851e601b452c99af417f5bfcd580b5355c6e8bb56fa2c701aacb7,PodSandboxId:f2a2f0197075a11d02804fe88085db4f127993180baa17934f4f244fbc5c7a8c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1,State:CONTAINER_RUNNING,CreatedAt:1628634817927932495,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2bvdc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c26b9021-1d86-475c-ac98-6f7e7e07c434,},Annotations:map[string]string{io.kubernetes.container.hash: 4970c570,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfebebba90bf30cb1d511f33375a3e9b6bd91ddc07cabfe5b5589a14fe35685c,PodSandboxId:552c60b72eff7ec71d413674f557175d18290855b2fbb557210d2ac15208a685,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628634817667541294,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af946d1d-fa19-47fa-8c83-fd1d06a0e788,},Annotations:map[string]string{io.kubernetes.container.hash: 31e4f950,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea6ca1aa48dc2cf026c5ed8947bfb8ef90ac7ac3d0f88bee1bc2b22511e83b4,PodSandboxId:fbff0e39503c70d012b363b8fe670ce8a9f3577e7e15e998600a2fa8b6670222,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628634816892860051,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-v7x6p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c4eb44b-9d97-4934-aa16-8b8625bf04cf,},Annotations:map[string]string{io.kubernetes.container.hash: e2b5faf8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dn
s-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:450a88f25c78b26558e59b4e708a2af565fa95ffdc04a9cb0e91b46224c6573d,PodSandboxId:2b47855215e9f9a81ed9a31f5e73478cd23463ae8391753d3b15909daf03e2bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628634815170944830,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lmhw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a10306d-93c9-4aac-b47a-8bd1d
406882c,},Annotations:map[string]string{io.kubernetes.container.hash: a3c465e1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b071c54f171f451e6e33cdbfbddab63c53133700bfe5dbff374f9d207f00facf,PodSandboxId:8370bc0b5b9a970d4007204b3b67a7266d452619c4263328f4a8403fa748ea90,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628634790862838159,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18b06801ebf2048d768b73e098da
8a40,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d8b11c78d38735ea0d95a1ef9a0812af3d2ecb679a807514125823d90f0f456,PodSandboxId:76613b5ae865f77eb677f7fc6c1d96c89c385010f19742b2a2daaf3b3d536da2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628634790776364691,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 77761625d867cf54e5130d9def04b55c,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b22cef1088cdfa5b9c5d2f6974fa60c0edf2132c0881c9c61984c16c2864f97,PodSandboxId:2d4522aed7a375e49e9a8b17d6da385d86a65fcbac37cb6f75c688d6842598cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628634790563156773,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
099813bef5425d688516ac434247f4d,},Annotations:map[string]string{io.kubernetes.container.hash: 96b9c909,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d359668a0c2391fdfb2cd9dcf67ffaf1c30ce8549c23d047c95ef4a75f2ada,PodSandboxId:42a0fc97ea8eb49918d4f19e7d3b3837f13d12aa3c1b7f8c191d955c9ae417e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628634790483989854,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee4e4232c1192224bf90edfa1030cde5,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 755f249c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=68ece088-5764-440c-b16f-1c5f298db714 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                               CREATED             STATE               NAME                      ATTEMPT             POD ID
	0546f83cfad21       docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47   3 minutes ago       Running             busybox                   0                   77754eaffbc89
	2f90af0c72885       6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb                                    4 minutes ago       Running             kindnet-cni               0                   f2a2f0197075a
	bfebebba90bf3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                    4 minutes ago       Running             storage-provisioner       0                   552c60b72eff7
	0ea6ca1aa48dc       296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899                                    4 minutes ago       Running             coredns                   0                   fbff0e39503c7
	450a88f25c78b       adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92                                    4 minutes ago       Running             kube-proxy                0                   2b47855215e9f
	b071c54f171f4       6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a                                    4 minutes ago       Running             kube-scheduler            0                   8370bc0b5b9a9
	9d8b11c78d387       bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9                                    4 minutes ago       Running             kube-controller-manager   0                   76613b5ae865f
	3b22cef1088cd       3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80                                    4 minutes ago       Running             kube-apiserver            0                   2d4522aed7a37
	a9d359668a0c2       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934                                    4 minutes ago       Running             etcd                      0                   42a0fc97ea8eb
	
	* 
	* ==> coredns [0ea6ca1aa48dc2cf026c5ed8947bfb8ef90ac7ac3d0f88bee1bc2b22511e83b4] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = 7ae91e86dd75dee9ae501cb58003198b
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-20210810223223-30291
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-20210810223223-30291
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=877a5691753f15214a0c269ac69dcdc5a4d99fcd
	                    minikube.k8s.io/name=multinode-20210810223223-30291
	                    minikube.k8s.io/updated_at=2021_08_10T22_33_20_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Aug 2021 22:33:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-20210810223223-30291
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Aug 2021 22:37:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Aug 2021 22:34:55 +0000   Tue, 10 Aug 2021 22:33:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Aug 2021 22:34:55 +0000   Tue, 10 Aug 2021 22:33:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Aug 2021 22:34:55 +0000   Tue, 10 Aug 2021 22:33:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Aug 2021 22:34:55 +0000   Tue, 10 Aug 2021 22:33:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.32
	  Hostname:    multinode-20210810223223-30291
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186320Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186320Ki
	  pods:               110
	System Info:
	  Machine ID:                 de21d03631a8455fad3ba3176e019295
	  System UUID:                de21d036-31a8-455f-ad3b-a3176e019295
	  Boot ID:                    bec9016f-f72c-4c6f-b82e-0ecc285f4ce2
	  Kernel Version:             4.19.182
	  OS Image:                   Buildroot 2020.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.20.2
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                      ------------  ----------  ---------------  -------------  ---
	  default                     busybox-84b6686758-5h7gq                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m10s
	  kube-system                 coredns-558bd4d5db-v7x6p                                  100m (5%)     0 (0%)      70Mi (3%)        170Mi (7%)     4m8s
	  kube-system                 etcd-multinode-20210810223223-30291                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m16s
	  kube-system                 kindnet-2bvdc                                             100m (5%)    100m (5%)   50Mi (2%)        50Mi (2%)      4m8s
	  kube-system                 kube-apiserver-multinode-20210810223223-30291             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 kube-controller-manager-multinode-20210810223223-30291    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-proxy-lmhw9                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-scheduler-multinode-20210810223223-30291             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 storage-provisioner                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From        Message
	  ----    ------                   ----   ----        -------
	  Normal  Starting                 4m16s  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m16s  kubelet     Node multinode-20210810223223-30291 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m16s  kubelet     Node multinode-20210810223223-30291 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m16s  kubelet     Node multinode-20210810223223-30291 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m16s  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m8s   kubelet     Node multinode-20210810223223-30291 status is now: NodeReady
	  Normal  Starting                 4m6s   kube-proxy  Starting kube-proxy.
	
	
	Name:               multinode-20210810223223-30291-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-20210810223223-30291-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Aug 2021 22:34:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-20210810223223-30291-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Aug 2021 22:37:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Aug 2021 22:34:49 +0000   Tue, 10 Aug 2021 22:34:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Aug 2021 22:34:49 +0000   Tue, 10 Aug 2021 22:34:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Aug 2021 22:34:49 +0000   Tue, 10 Aug 2021 22:34:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Aug 2021 22:34:49 +0000   Tue, 10 Aug 2021 22:34:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.251
	  Hostname:    multinode-20210810223223-30291-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186496Ki
	  pods:               110
	System Info:
	  Machine ID:                 005771fefc2349659960c7a87d3f4dae
	  System UUID:                005771fe-fc23-4965-9960-c7a87d3f4dae
	  Boot ID:                    1c43cbb6-d195-42d9-894f-bf1b95ff036b
	  Kernel Version:             4.19.182
	  OS Image:                   Buildroot 2020.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.20.2
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-84b6686758-nfzzk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m10s
	  kube-system                 kindnet-frf82               100m (5%)    100m (5%)   50Mi (2%)        50Mi (2%)      3m22s
	  kube-system                 kube-proxy-6t6mb            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 3m22s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m22s (x2 over 3m22s)  kubelet     Node multinode-20210810223223-30291-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m22s (x2 over 3m22s)  kubelet     Node multinode-20210810223223-30291-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m22s (x2 over 3m22s)  kubelet     Node multinode-20210810223223-30291-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m22s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 3m19s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m12s                  kubelet     Node multinode-20210810223223-30291-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [Aug10 22:32] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.088919] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +3.694860] Unstable clock detected, switching default tracing clock to "global"
	              If you want to keep using the local clock, then add:
	                "trace_clock=local"
	              on the kernel command line
	[  +0.000017] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.251584] systemd-fstab-generator[1161]: Ignoring "noauto" for root device
	[  +0.038227] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +0.961352] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1727 comm=systemd-network
	[  +1.254958] vboxguest: loading out-of-tree module taints kernel.
	[  +0.006174] vboxguest: PCI device not found, probably running on physical hardware.
	[  +1.709643] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[ +14.578437] systemd-fstab-generator[2161]: Ignoring "noauto" for root device
	[  +0.147253] systemd-fstab-generator[2174]: Ignoring "noauto" for root device
	[  +0.197594] systemd-fstab-generator[2201]: Ignoring "noauto" for root device
	[Aug10 22:33] systemd-fstab-generator[2404]: Ignoring "noauto" for root device
	[ +17.433952] systemd-fstab-generator[2814]: Ignoring "noauto" for root device
	[ +16.294097] kauditd_printk_skb: 38 callbacks suppressed
	[Aug10 22:34] NFSD: Unable to end grace period: -110
	
	* 
	* ==> etcd [a9d359668a0c2391fdfb2cd9dcf67ffaf1c30ce8549c23d047c95ef4a75f2ada] <==
	* 2021-08-10 22:34:12.398752 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:1128" took too long (243.986575ms) to execute
	2021-08-10 22:34:12.399124 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.957229361s) to execute
	2021-08-10 22:34:12.399326 W | etcdserver: read-only range request "key:\"/registry/secrets/kube-system/bootstrap-token-gytvgv\" " with result "range_response_count:0 size:5" took too long (1.51025777s) to execute
	2021-08-10 22:34:12.399813 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.380960986s) to execute
	2021-08-10 22:34:12.464383 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:34:22.464807 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:34:32.464604 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:34:42.464793 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:34:52.464706 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:35:02.465202 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:35:12.464211 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:35:22.464958 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:35:32.465006 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:35:42.465044 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:35:52.465381 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:36:02.464159 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:36:12.464138 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:36:22.464574 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:36:32.464335 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:36:42.464340 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:36:52.464228 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:37:02.463816 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:37:12.464274 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:37:22.466118 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:37:32.464849 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  22:37:41 up 5 min,  0 users,  load average: 0.36, 0.45, 0.22
	Linux multinode-20210810223223-30291 4.19.182 #1 SMP Fri Aug 6 09:11:32 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [3b22cef1088cdfa5b9c5d2f6974fa60c0edf2132c0881c9c61984c16c2864f97] <==
	* Trace[607947834]: ---"Listing from storage done" 1371ms (22:34:00.110)
	Trace[607947834]: [1.374793322s] [1.374793322s] END
	I0810 22:34:10.115315       1 trace.go:205] Trace[1180611157]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (10-Aug-2021 22:34:08.005) (total time: 2109ms):
	Trace[1180611157]: ---"Transaction committed" 2107ms (22:34:00.115)
	Trace[1180611157]: [2.109328429s] [2.109328429s] END
	I0810 22:34:12.400291       1 trace.go:205] Trace[1738354548]: "Get" url:/api/v1/namespaces/kube-system/secrets/bootstrap-token-gytvgv,user-agent:kubeadm/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.50.32,accept:application/json, */*,protocol:HTTP/2.0 (10-Aug-2021 22:34:10.887) (total time: 1512ms):
	Trace[1738354548]: [1.512270936s] [1.512270936s] END
	I0810 22:34:28.512071       1 client.go:360] parsed scheme: "passthrough"
	I0810 22:34:28.512199       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0810 22:34:28.512250       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0810 22:34:59.758132       1 client.go:360] parsed scheme: "passthrough"
	I0810 22:34:59.758266       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0810 22:34:59.758290       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0810 22:35:40.221354       1 client.go:360] parsed scheme: "passthrough"
	I0810 22:35:40.221745       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0810 22:35:40.221798       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0810 22:36:13.885333       1 client.go:360] parsed scheme: "passthrough"
	I0810 22:36:13.885714       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0810 22:36:13.885756       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0810 22:36:49.812970       1 client.go:360] parsed scheme: "passthrough"
	I0810 22:36:49.813006       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0810 22:36:49.813017       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0810 22:37:25.388022       1 client.go:360] parsed scheme: "passthrough"
	I0810 22:37:25.388108       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0810 22:37:25.388135       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [9d8b11c78d38735ea0d95a1ef9a0812af3d2ecb679a807514125823d90f0f456] <==
	* I0810 22:33:33.780317       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-558bd4d5db to 2"
	I0810 22:33:33.807397       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0810 22:33:33.844874       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-lmhw9"
	I0810 22:33:33.852188       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-2bvdc"
	E0810 22:33:33.959456       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"3eb0224f-214a-4d5e-ba63-b7b722448d21", ResourceVersion:"270", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764231599, loc:(*time.Location)(0x72ff440)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc00174af48), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00174af60)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.
LabelSelector)(0xc0005e5f80), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Gl
usterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc00183ee00), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00174a
f78), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeS
ource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00174af90), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil),
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.21.3", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0005e5fc0)}}, Resources:v1.R
esourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001110cc0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPo
licy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00195fa78), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0004844d0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), Runtime
ClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0002b8d10)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00195fac8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	E0810 22:33:34.033293       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"a503f3df-695b-4295-a3fb-be1b75ae37c5", ResourceVersion:"419", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764231600, loc:(*time.Location)(0x72ff440)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"k
indnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"kindest/kindnetd:v20210326-1e038dc5\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists
\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0021b2f90), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0021b2fa8)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0021b2fc0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0021b2fd8)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0021a1000), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, Creat
ionTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0021b2ff0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexV
olumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0021b3008), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVol
umeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSI
VolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0021b3020), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v
1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"kindest/kindnetd:v20210326-1e038dc5", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0021a1020)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0021a1060)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amoun
t{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropag
ation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc00218ff20), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0021b15b8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000aaefc0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil
), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0021c59c0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0021b1600)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition
(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0810 22:33:34.038041       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-v7x6p"
	I0810 22:33:34.059269       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-gjhgp"
	E0810 22:33:34.077923       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"3eb0224f-214a-4d5e-ba63-b7b722448d21", ResourceVersion:"418", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764231599, loc:(*time.Location)(0x72ff440)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0021b2de0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0021b2df8)}, v1.ManagedFieldsEntry{Manager:"kube-co
ntroller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0021b2e10), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0021b2e28)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0021a0ee0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElastic
BlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0021cad00), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSour
ce)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0021b2e40), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSo
urce)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0021b2e58), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil),
Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.21.3", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil),
WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0021a0f20)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"F
ile", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc00218fda0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0021b1248), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000aaee00), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)
(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0021c5810)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0021b1298)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest ve
rsion and try again
	I0810 22:33:34.139230       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-gjhgp"
	W0810 22:34:19.160323       1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-20210810223223-30291-m02" does not exist
	I0810 22:34:19.213872       1 range_allocator.go:373] Set node multinode-20210810223223-30291-m02 PodCIDR to [10.244.1.0/24]
	I0810 22:34:19.229592       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-6t6mb"
	I0810 22:34:19.231191       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-frf82"
	E0810 22:34:19.348447       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"3eb0224f-214a-4d5e-ba63-b7b722448d21", ResourceVersion:"546", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764231599, loc:(*time.Location)(0x72ff440)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001fa7680), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001fa7698)}, v1.ManagedFieldsEntry{Manager:"kube-co
ntroller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001fa76b0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001fa76f8)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0015ce980), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElastic
BlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc001f1a3c0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSour
ce)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001fa7710), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSo
urce)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001fa7728), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil),
Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.21.3", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil),
WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0015ceb00)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"F
ile", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001e67800), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001f26338), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000239570), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)
(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0020e6730)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001f26388)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:1, NumberMisscheduled:0, DesiredNumberScheduled:2, NumberReady:1, ObservedGeneration:1, UpdatedNumberScheduled:1, NumberAvailable:1, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest ve
rsion and try again
	W0810 22:34:23.229022       1 node_lifecycle_controller.go:1013] Missing timestamp for Node multinode-20210810223223-30291-m02. Assuming now as a timestamp.
	I0810 22:34:23.229242       1 event.go:291] "Event occurred" object="multinode-20210810223223-30291-m02" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-20210810223223-30291-m02 event: Registered Node multinode-20210810223223-30291-m02 in Controller"
	I0810 22:34:31.756639       1 event.go:291] "Event occurred" object="default/busybox" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-84b6686758 to 2"
	I0810 22:34:31.775807       1 event.go:291] "Event occurred" object="default/busybox-84b6686758" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-84b6686758-nfzzk"
	I0810 22:34:31.786266       1 event.go:291] "Event occurred" object="default/busybox-84b6686758" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-84b6686758-5h7gq"
	I0810 22:34:33.244427       1 event.go:291] "Event occurred" object="default/busybox-84b6686758-nfzzk" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-84b6686758-nfzzk"
	
	* 
	* ==> kube-proxy [450a88f25c78b26558e59b4e708a2af565fa95ffdc04a9cb0e91b46224c6573d] <==
	* I0810 22:33:35.569628       1 node.go:172] Successfully retrieved node IP: 192.168.50.32
	I0810 22:33:35.569813       1 server_others.go:140] Detected node IP 192.168.50.32
	W0810 22:33:35.569841       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	W0810 22:33:35.654806       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0810 22:33:35.654827       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0810 22:33:35.654841       1 server_others.go:212] Using iptables Proxier.
	I0810 22:33:35.655734       1 server.go:643] Version: v1.21.3
	I0810 22:33:35.658259       1 config.go:315] Starting service config controller
	I0810 22:33:35.658621       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0810 22:33:35.658673       1 config.go:224] Starting endpoint slice config controller
	I0810 22:33:35.658678       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0810 22:33:35.676217       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0810 22:33:35.685273       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0810 22:33:35.759211       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0810 22:33:35.759304       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [b071c54f171f451e6e33cdbfbddab63c53133700bfe5dbff374f9d207f00facf] <==
	* E0810 22:33:15.981752       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0810 22:33:15.981856       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0810 22:33:15.981938       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0810 22:33:15.982018       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0810 22:33:15.982342       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0810 22:33:15.982446       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0810 22:33:15.983840       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0810 22:33:15.987768       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0810 22:33:15.988341       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0810 22:33:15.988857       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0810 22:33:16.802264       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0810 22:33:16.819246       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0810 22:33:16.973719       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0810 22:33:17.040697       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0810 22:33:17.045349       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0810 22:33:17.064578       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0810 22:33:17.193884       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0810 22:33:17.273017       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0810 22:33:17.289233       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0810 22:33:17.372978       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0810 22:33:17.401904       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0810 22:33:17.428102       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0810 22:33:17.471670       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0810 22:33:17.563926       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0810 22:33:19.977395       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2021-08-10 22:32:34 UTC, end at Tue 2021-08-10 22:37:41 UTC. --
	Aug 10 22:33:33 multinode-20210810223223-30291 kubelet[2823]: I0810 22:33:33.910283    2823 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2a10306d-93c9-4aac-b47a-8bd1d406882c-kube-proxy\") pod \"kube-proxy-lmhw9\" (UID: \"2a10306d-93c9-4aac-b47a-8bd1d406882c\") "
	Aug 10 22:33:33 multinode-20210810223223-30291 kubelet[2823]: I0810 22:33:33.910302    2823 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2a10306d-93c9-4aac-b47a-8bd1d406882c-lib-modules\") pod \"kube-proxy-lmhw9\" (UID: \"2a10306d-93c9-4aac-b47a-8bd1d406882c\") "
	Aug 10 22:33:33 multinode-20210810223223-30291 kubelet[2823]: I0810 22:33:33.910321    2823 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c26b9021-1d86-475c-ac98-6f7e7e07c434-cni-cfg\") pod \"kindnet-2bvdc\" (UID: \"c26b9021-1d86-475c-ac98-6f7e7e07c434\") "
	Aug 10 22:33:33 multinode-20210810223223-30291 kubelet[2823]: I0810 22:33:33.910339    2823 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c26b9021-1d86-475c-ac98-6f7e7e07c434-lib-modules\") pod \"kindnet-2bvdc\" (UID: \"c26b9021-1d86-475c-ac98-6f7e7e07c434\") "
	Aug 10 22:33:33 multinode-20210810223223-30291 kubelet[2823]: I0810 22:33:33.910418    2823 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctbl8\" (UniqueName: \"kubernetes.io/projected/2a10306d-93c9-4aac-b47a-8bd1d406882c-kube-api-access-ctbl8\") pod \"kube-proxy-lmhw9\" (UID: \"2a10306d-93c9-4aac-b47a-8bd1d406882c\") "
	Aug 10 22:33:34 multinode-20210810223223-30291 kubelet[2823]: I0810 22:33:34.035403    2823 topology_manager.go:187] "Topology Admit Handler"
	Aug 10 22:33:34 multinode-20210810223223-30291 kubelet[2823]: E0810 22:33:34.041387    2823 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:multinode-20210810223223-30291" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'multinode-20210810223223-30291' and this object
	Aug 10 22:33:34 multinode-20210810223223-30291 kubelet[2823]: I0810 22:33:34.092432    2823 topology_manager.go:187] "Topology Admit Handler"
	Aug 10 22:33:34 multinode-20210810223223-30291 kubelet[2823]: I0810 22:33:34.117762    2823 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c4eb44b-9d97-4934-aa16-8b8625bf04cf-config-volume\") pod \"coredns-558bd4d5db-v7x6p\" (UID: \"0c4eb44b-9d97-4934-aa16-8b8625bf04cf\") "
	Aug 10 22:33:34 multinode-20210810223223-30291 kubelet[2823]: I0810 22:33:34.117795    2823 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnwhk\" (UniqueName: \"kubernetes.io/projected/0c4eb44b-9d97-4934-aa16-8b8625bf04cf-kube-api-access-vnwhk\") pod \"coredns-558bd4d5db-v7x6p\" (UID: \"0c4eb44b-9d97-4934-aa16-8b8625bf04cf\") "
	Aug 10 22:33:35 multinode-20210810223223-30291 kubelet[2823]: E0810 22:33:35.219307    2823 configmap.go:200] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Aug 10 22:33:35 multinode-20210810223223-30291 kubelet[2823]: E0810 22:33:35.219475    2823 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/configmap/0c4eb44b-9d97-4934-aa16-8b8625bf04cf-config-volume podName:0c4eb44b-9d97-4934-aa16-8b8625bf04cf nodeName:}" failed. No retries permitted until 2021-08-10 22:33:35.719411618 +0000 UTC m=+16.076751012 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c4eb44b-9d97-4934-aa16-8b8625bf04cf-config-volume\") pod \"coredns-558bd4d5db-v7x6p\" (UID: \"0c4eb44b-9d97-4934-aa16-8b8625bf04cf\") : failed to sync configmap cache: timed out waiting for the condition"
	Aug 10 22:33:35 multinode-20210810223223-30291 kubelet[2823]: E0810 22:33:35.757470    2823 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/c26b9021-1d86-475c-ac98-6f7e7e07c434/etc-hosts with error exit status 1" pod="kube-system/kindnet-2bvdc"
	Aug 10 22:33:36 multinode-20210810223223-30291 kubelet[2823]: I0810 22:33:36.209114    2823 topology_manager.go:187] "Topology Admit Handler"
	Aug 10 22:33:36 multinode-20210810223223-30291 kubelet[2823]: I0810 22:33:36.257659    2823 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7dtx\" (UniqueName: \"kubernetes.io/projected/af946d1d-fa19-47fa-8c83-fd1d06a0e788-kube-api-access-t7dtx\") pod \"storage-provisioner\" (UID: \"af946d1d-fa19-47fa-8c83-fd1d06a0e788\") "
	Aug 10 22:33:36 multinode-20210810223223-30291 kubelet[2823]: I0810 22:33:36.257794    2823 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/af946d1d-fa19-47fa-8c83-fd1d06a0e788-tmp\") pod \"storage-provisioner\" (UID: \"af946d1d-fa19-47fa-8c83-fd1d06a0e788\") "
	Aug 10 22:34:31 multinode-20210810223223-30291 kubelet[2823]: I0810 22:34:31.845176    2823 topology_manager.go:187] "Topology Admit Handler"
	Aug 10 22:34:31 multinode-20210810223223-30291 kubelet[2823]: E0810 22:34:31.851799    2823 reflector.go:138] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:multinode-20210810223223-30291" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'multinode-20210810223223-30291' and this object
	Aug 10 22:34:32 multinode-20210810223223-30291 kubelet[2823]: I0810 22:34:32.036112    2823 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb4qr\" (UniqueName: \"kubernetes.io/projected/d11e1a39-be6d-4d16-9086-b6cfef5e1644-kube-api-access-gb4qr\") pod \"busybox-84b6686758-5h7gq\" (UID: \"d11e1a39-be6d-4d16-9086-b6cfef5e1644\") "
	Aug 10 22:34:33 multinode-20210810223223-30291 kubelet[2823]: E0810 22:34:33.144805    2823 projected.go:293] Couldn't get configMap default/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Aug 10 22:34:33 multinode-20210810223223-30291 kubelet[2823]: E0810 22:34:33.144936    2823 projected.go:199] Error preparing data for projected volume kube-api-access-gb4qr for pod default/busybox-84b6686758-5h7gq: failed to sync configmap cache: timed out waiting for the condition
	Aug 10 22:34:33 multinode-20210810223223-30291 kubelet[2823]: E0810 22:34:33.145073    2823 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/projected/d11e1a39-be6d-4d16-9086-b6cfef5e1644-kube-api-access-gb4qr podName:d11e1a39-be6d-4d16-9086-b6cfef5e1644 nodeName:}" failed. No retries permitted until 2021-08-10 22:34:33.645034745 +0000 UTC m=+74.002374286 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"kube-api-access-gb4qr\" (UniqueName: \"kubernetes.io/projected/d11e1a39-be6d-4d16-9086-b6cfef5e1644-kube-api-access-gb4qr\") pod \"busybox-84b6686758-5h7gq\" (UID: \"d11e1a39-be6d-4d16-9086-b6cfef5e1644\") : failed to sync configmap cache: timed out waiting for the condition"
	Aug 10 22:34:36 multinode-20210810223223-30291 kubelet[2823]: E0810 22:34:36.475104    2823 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/d11e1a39-be6d-4d16-9086-b6cfef5e1644/etc-hosts with error exit status 1" pod="default/busybox-84b6686758-5h7gq"
	Aug 10 22:35:37 multinode-20210810223223-30291 kubelet[2823]: E0810 22:35:37.134148    2823 kubelet.go:1701] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[config-volume kube-api-access-txw9f], unattached volumes=[config-volume kube-api-access-txw9f]: timed out waiting for the condition" pod="kube-system/coredns-558bd4d5db-gjhgp"
	Aug 10 22:35:37 multinode-20210810223223-30291 kubelet[2823]: E0810 22:35:37.134331    2823 pod_workers.go:190] "Error syncing pod, skipping" err="unmounted volumes=[config-volume kube-api-access-txw9f], unattached volumes=[config-volume kube-api-access-txw9f]: timed out waiting for the condition" pod="kube-system/coredns-558bd4d5db-gjhgp" podUID=122395e4-35b4-4693-843a-15fb7d8031f5
	
	* 
	* ==> storage-provisioner [bfebebba90bf30cb1d511f33375a3e9b6bd91ddc07cabfe5b5589a14fe35685c] <==
	* I0810 22:33:37.814409       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0810 22:33:37.835858       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0810 22:33:37.835985       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0810 22:33:37.854998       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0810 22:33:37.855860       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-20210810223223-30291_afa147a1-4085-48f2-b329-57da5a25f4d5!
	I0810 22:33:37.857788       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4af19e78-1255-4880-9e84-6ce0ffca1e58", APIVersion:"v1", ResourceVersion:"484", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-20210810223223-30291_afa147a1-4085-48f2-b329-57da5a25f4d5 became leader
	I0810 22:33:37.959707       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-20210810223223-30291_afa147a1-4085-48f2-b329-57da5a25f4d5!
	

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-20210810223223-30291 -n multinode-20210810223223-30291
helpers_test.go:262: (dbg) Run:  kubectl --context multinode-20210810223223-30291 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:268: non-running pods: 
helpers_test.go:270: ======> post-mortem[TestMultiNode/serial/DeployApp2Nodes]: describe non-running pods <======
helpers_test.go:273: (dbg) Run:  kubectl --context multinode-20210810223223-30291 describe pod 
helpers_test.go:273: (dbg) Non-zero exit: kubectl --context multinode-20210810223223-30291 describe pod : exit status 1 (50.114458ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:275: kubectl --context multinode-20210810223223-30291 describe pod : exit status 1
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (191.06s)

TestMultiNode/serial/PingHostFrom2Pods (63.32s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210810223223-30291 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:529: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210810223223-30291 -- exec busybox-84b6686758-5h7gq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:537: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210810223223-30291 -- exec busybox-84b6686758-5h7gq -- sh -c "ping -c 1 192.168.50.1"
multinode_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-20210810223223-30291 -- exec busybox-84b6686758-5h7gq -- sh -c "ping -c 1 192.168.50.1": exit status 1 (223.158791ms)

-- stdout --
	PING 192.168.50.1 (192.168.50.1): 56 data bytes

-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

** /stderr **
multinode_test.go:538: Failed to ping host (192.168.50.1) from pod (busybox-84b6686758-5h7gq): exit status 1
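The `ping: permission denied (are you root?)` failure above is the classic unprivileged-ICMP symptom: busybox `ping` opens a raw ICMP socket, which requires root, `CAP_NET_RAW`, or a group id inside the kernel's unprivileged-ping range. A minimal check, illustrative only and not part of the original test run, assuming a Linux shell inside the pod:

```shell
# busybox ping uses a raw ICMP socket. Without root or CAP_NET_RAW, it can
# only succeed if the process gid falls inside the kernel's
# unprivileged-ICMP range, which is readable here:
cat /proc/sys/net/ipv4/ping_group_range
```

A printed range of `1	0` (min greater than max) means unprivileged ICMP is disabled entirely, consistent with the exit status 1 seen above; adding `NET_RAW` to the pod's `securityContext.capabilities` is one common workaround.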
multinode_test.go:529: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210810223223-30291 -- exec busybox-84b6686758-nfzzk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:529: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-20210810223223-30291 -- exec busybox-84b6686758-nfzzk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": (1m0.313873722s)
multinode_test.go:537: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210810223223-30291 -- exec busybox-84b6686758-nfzzk -- sh -c "ping -c 1 <nil>"
multinode_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-20210810223223-30291 -- exec busybox-84b6686758-nfzzk -- sh -c "ping -c 1 <nil>": exit status 2 (248.456851ms)

** stderr ** 
	sh: syntax error: unexpected end of file
	command terminated with exit code 2

** /stderr **
multinode_test.go:538: Failed to ping host (<nil>) from pod (busybox-84b6686758-nfzzk): exit status 2
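The second failure is a harness artifact rather than a networking one: the `nslookup … | awk 'NR==5' | cut -d' ' -f3` pipeline evidently produced an empty result for this pod, the Go test formatted the missing value as the literal `<nil>`, and that string was interpolated into the `sh -c` command line. A minimal sketch, outside the test run, of why `sh` then dies with a parse error:

```shell
# "<nil>" is two redirections to sh, not a literal word: "<nil" redirects
# stdin from a file named "nil", and the trailing ">" starts an output
# redirection with no target word, so parsing fails before ping ever runs.
sh -c "ping -c 1 <nil>" || echo "sh exited with status $?"
```

The shell reports `syntax error: unexpected end of file` (wording varies by shell) and exits with status 2, matching the `command terminated with exit code 2` in the stderr above.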
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-20210810223223-30291 -n multinode-20210810223223-30291
helpers_test.go:245: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210810223223-30291 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210810223223-30291 logs -n 25: (1.267903813s)
helpers_test.go:253: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |-----------|---------------------------------------------------|----------------------------------------|----------|---------|-------------------------------|-------------------------------|
	|  Command  |                       Args                        |                Profile                 |   User   | Version |          Start Time           |           End Time            |
	|-----------|---------------------------------------------------|----------------------------------------|----------|---------|-------------------------------|-------------------------------|
	| dashboard | --url --port 36195 -p                             | functional-20210810222707-30291        | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:30:11 UTC | Tue, 10 Aug 2021 22:30:17 UTC |
	|           | functional-20210810222707-30291                   |                                        |          |         |                               |                               |
	|           | --alsologtostderr -v=1                            |                                        |          |         |                               |                               |
	| -p        | functional-20210810222707-30291                   | functional-20210810222707-30291        | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:30:19 UTC | Tue, 10 Aug 2021 22:30:19 UTC |
	|           | ssh stat                                          |                                        |          |         |                               |                               |
	|           | /mount-9p/created-by-test                         |                                        |          |         |                               |                               |
	| -p        | functional-20210810222707-30291                   | functional-20210810222707-30291        | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:30:19 UTC | Tue, 10 Aug 2021 22:30:20 UTC |
	|           | ssh stat                                          |                                        |          |         |                               |                               |
	|           | /mount-9p/created-by-pod                          |                                        |          |         |                               |                               |
	| -p        | functional-20210810222707-30291                   | functional-20210810222707-30291        | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:30:20 UTC | Tue, 10 Aug 2021 22:30:20 UTC |
	|           | ssh sudo umount -f /mount-9p                      |                                        |          |         |                               |                               |
	| -p        | functional-20210810222707-30291                   | functional-20210810222707-30291        | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:30:21 UTC | Tue, 10 Aug 2021 22:30:21 UTC |
	|           | ssh findmnt -T /mount-9p | grep                   |                                        |          |         |                               |                               |
	|           | 9p                                                |                                        |          |         |                               |                               |
	| -p        | functional-20210810222707-30291                   | functional-20210810222707-30291        | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:30:21 UTC | Tue, 10 Aug 2021 22:30:21 UTC |
	|           | ssh -- ls -la /mount-9p                           |                                        |          |         |                               |                               |
	| delete    | -p                                                | functional-20210810222707-30291        | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:30:46 UTC | Tue, 10 Aug 2021 22:30:47 UTC |
	|           | functional-20210810222707-30291                   |                                        |          |         |                               |                               |
	| start     | -p                                                | json-output-20210810223047-30291       | testUser | v1.22.0 | Tue, 10 Aug 2021 22:30:47 UTC | Tue, 10 Aug 2021 22:32:12 UTC |
	|           | json-output-20210810223047-30291                  |                                        |          |         |                               |                               |
	|           | --output=json --user=testUser                     |                                        |          |         |                               |                               |
	|           | --memory=2200 --wait=true                         |                                        |          |         |                               |                               |
	|           | --driver=kvm2                                     |                                        |          |         |                               |                               |
	|           | --container-runtime=crio                          |                                        |          |         |                               |                               |
	| pause     | -p                                                | json-output-20210810223047-30291       | testUser | v1.22.0 | Tue, 10 Aug 2021 22:32:12 UTC | Tue, 10 Aug 2021 22:32:13 UTC |
	|           | json-output-20210810223047-30291                  |                                        |          |         |                               |                               |
	|           | --output=json --user=testUser                     |                                        |          |         |                               |                               |
	| unpause   | -p                                                | json-output-20210810223047-30291       | testUser | v1.22.0 | Tue, 10 Aug 2021 22:32:13 UTC | Tue, 10 Aug 2021 22:32:14 UTC |
	|           | json-output-20210810223047-30291                  |                                        |          |         |                               |                               |
	|           | --output=json --user=testUser                     |                                        |          |         |                               |                               |
	| stop      | -p                                                | json-output-20210810223047-30291       | testUser | v1.22.0 | Tue, 10 Aug 2021 22:32:14 UTC | Tue, 10 Aug 2021 22:32:22 UTC |
	|           | json-output-20210810223047-30291                  |                                        |          |         |                               |                               |
	|           | --output=json --user=testUser                     |                                        |          |         |                               |                               |
	| delete    | -p                                                | json-output-20210810223047-30291       | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:32:22 UTC | Tue, 10 Aug 2021 22:32:23 UTC |
	|           | json-output-20210810223047-30291                  |                                        |          |         |                               |                               |
	| delete    | -p                                                | json-output-error-20210810223223-30291 | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:32:23 UTC | Tue, 10 Aug 2021 22:32:23 UTC |
	|           | json-output-error-20210810223223-30291            |                                        |          |         |                               |                               |
	| start     | -p                                                | multinode-20210810223223-30291         | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:32:23 UTC | Tue, 10 Aug 2021 22:34:30 UTC |
	|           | multinode-20210810223223-30291                    |                                        |          |         |                               |                               |
	|           | --wait=true --memory=2200                         |                                        |          |         |                               |                               |
	|           | --nodes=2 -v=8                                    |                                        |          |         |                               |                               |
	|           | --alsologtostderr                                 |                                        |          |         |                               |                               |
	|           | --driver=kvm2                                     |                                        |          |         |                               |                               |
	|           | --container-runtime=crio                          |                                        |          |         |                               |                               |
	| kubectl   | -p multinode-20210810223223-30291 -- apply -f     | multinode-20210810223223-30291         | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:34:31 UTC | Tue, 10 Aug 2021 22:34:31 UTC |
	|           | ./testdata/multinodes/multinode-pod-dns-test.yaml |                                        |          |         |                               |                               |
	| kubectl   | -p                                                | multinode-20210810223223-30291         | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:34:31 UTC | Tue, 10 Aug 2021 22:34:37 UTC |
	|           | multinode-20210810223223-30291                    |                                        |          |         |                               |                               |
	|           | -- rollout status                                 |                                        |          |         |                               |                               |
	|           | deployment/busybox                                |                                        |          |         |                               |                               |
	| kubectl   | -p multinode-20210810223223-30291                 | multinode-20210810223223-30291         | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:34:38 UTC | Tue, 10 Aug 2021 22:34:38 UTC |
	|           | -- get pods -o                                    |                                        |          |         |                               |                               |
	|           | jsonpath='{.items[*].status.podIP}'               |                                        |          |         |                               |                               |
	| kubectl   | -p multinode-20210810223223-30291                 | multinode-20210810223223-30291         | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:34:38 UTC | Tue, 10 Aug 2021 22:34:38 UTC |
	|           | -- get pods -o                                    |                                        |          |         |                               |                               |
	|           | jsonpath='{.items[*].metadata.name}'              |                                        |          |         |                               |                               |
	| kubectl   | -p                                                | multinode-20210810223223-30291         | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:34:38 UTC | Tue, 10 Aug 2021 22:34:38 UTC |
	|           | multinode-20210810223223-30291                    |                                        |          |         |                               |                               |
	|           | -- exec                                           |                                        |          |         |                               |                               |
	|           | busybox-84b6686758-5h7gq --                       |                                        |          |         |                               |                               |
	|           | nslookup kubernetes.io                            |                                        |          |         |                               |                               |
	| kubectl   | -p                                                | multinode-20210810223223-30291         | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:35:39 UTC | Tue, 10 Aug 2021 22:35:39 UTC |
	|           | multinode-20210810223223-30291                    |                                        |          |         |                               |                               |
	|           | -- exec                                           |                                        |          |         |                               |                               |
	|           | busybox-84b6686758-5h7gq --                       |                                        |          |         |                               |                               |
	|           | nslookup kubernetes.default                       |                                        |          |         |                               |                               |
	| kubectl   | -p multinode-20210810223223-30291                 | multinode-20210810223223-30291         | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:36:39 UTC | Tue, 10 Aug 2021 22:36:39 UTC |
	|           | -- exec busybox-84b6686758-5h7gq                  |                                        |          |         |                               |                               |
	|           | -- nslookup                                       |                                        |          |         |                               |                               |
	|           | kubernetes.default.svc.cluster.local              |                                        |          |         |                               |                               |
	| -p        | multinode-20210810223223-30291                    | multinode-20210810223223-30291         | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:37:40 UTC | Tue, 10 Aug 2021 22:37:41 UTC |
	|           | logs -n 25                                        |                                        |          |         |                               |                               |
	| kubectl   | -p multinode-20210810223223-30291                 | multinode-20210810223223-30291         | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:37:42 UTC | Tue, 10 Aug 2021 22:37:42 UTC |
	|           | -- get pods -o                                    |                                        |          |         |                               |                               |
	|           | jsonpath='{.items[*].metadata.name}'              |                                        |          |         |                               |                               |
	| kubectl   | -p                                                | multinode-20210810223223-30291         | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:37:42 UTC | Tue, 10 Aug 2021 22:37:42 UTC |
	|           | multinode-20210810223223-30291                    |                                        |          |         |                               |                               |
	|           | -- exec                                           |                                        |          |         |                               |                               |
	|           | busybox-84b6686758-5h7gq                          |                                        |          |         |                               |                               |
	|           | -- sh -c nslookup                                 |                                        |          |         |                               |                               |
	|           | host.minikube.internal | awk                      |                                        |          |         |                               |                               |
	|           | 'NR==5' | cut -d' ' -f3                           |                                        |          |         |                               |                               |
	| kubectl   | -p                                                | multinode-20210810223223-30291         | jenkins  | v1.22.0 | Tue, 10 Aug 2021 22:37:43 UTC | Tue, 10 Aug 2021 22:38:43 UTC |
	|           | multinode-20210810223223-30291                    |                                        |          |         |                               |                               |
	|           | -- exec                                           |                                        |          |         |                               |                               |
	|           | busybox-84b6686758-nfzzk                          |                                        |          |         |                               |                               |
	|           | -- sh -c nslookup                                 |                                        |          |         |                               |                               |
	|           | host.minikube.internal | awk                      |                                        |          |         |                               |                               |
	|           | 'NR==5' | cut -d' ' -f3                           |                                        |          |         |                               |                               |
	|-----------|---------------------------------------------------|----------------------------------------|----------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/10 22:32:23
	Running on machine: debian-jenkins-agent-3
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0810 22:32:23.642564    4347 out.go:298] Setting OutFile to fd 1 ...
	I0810 22:32:23.642648    4347 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:32:23.642679    4347 out.go:311] Setting ErrFile to fd 2...
	I0810 22:32:23.642682    4347 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:32:23.642797    4347 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin
	I0810 22:32:23.643088    4347 out.go:305] Setting JSON to false
	I0810 22:32:23.678453    4347 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-3","uptime":8104,"bootTime":1628626640,"procs":153,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0810 22:32:23.678565    4347 start.go:121] virtualization: kvm guest
	I0810 22:32:23.681051    4347 out.go:177] * [multinode-20210810223223-30291] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0810 22:32:23.682514    4347 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0810 22:32:23.681216    4347 notify.go:169] Checking for updates...
	I0810 22:32:23.684022    4347 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0810 22:32:23.685360    4347 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	I0810 22:32:23.686753    4347 out.go:177]   - MINIKUBE_LOCATION=12230
	I0810 22:32:23.686947    4347 driver.go:335] Setting default libvirt URI to qemu:///system
	I0810 22:32:23.716577    4347 out.go:177] * Using the kvm2 driver based on user configuration
	I0810 22:32:23.716602    4347 start.go:278] selected driver: kvm2
	I0810 22:32:23.716608    4347 start.go:751] validating driver "kvm2" against <nil>
	I0810 22:32:23.716625    4347 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0810 22:32:23.717725    4347 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:32:23.717883    4347 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0810 22:32:23.728536    4347 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0810 22:32:23.728591    4347 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0810 22:32:23.728763    4347 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0810 22:32:23.728787    4347 cni.go:93] Creating CNI manager for ""
	I0810 22:32:23.728792    4347 cni.go:154] 0 nodes found, recommending kindnet
	I0810 22:32:23.728797    4347 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0810 22:32:23.728805    4347 start_flags.go:277] config:
	{Name:multinode-20210810223223-30291 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-20210810223223-30291 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0}
	I0810 22:32:23.728921    4347 iso.go:123] acquiring lock: {Name:mke8829815ca14456120fefc524d0a056bf82da0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:32:23.730742    4347 out.go:177] * Starting control plane node multinode-20210810223223-30291 in cluster multinode-20210810223223-30291
	I0810 22:32:23.730775    4347 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0810 22:32:23.730811    4347 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0810 22:32:23.730841    4347 cache.go:56] Caching tarball of preloaded images
	I0810 22:32:23.730956    4347 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0810 22:32:23.730986    4347 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on crio
	I0810 22:32:23.731372    4347 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/config.json ...
	I0810 22:32:23.731405    4347 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/config.json: {Name:mka5062e6b69c2d8df20f3df3953506ad4b5dcbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:32:23.731560    4347 cache.go:205] Successfully downloaded all kic artifacts
	I0810 22:32:23.731610    4347 start.go:313] acquiring machines lock for multinode-20210810223223-30291: {Name:mk9647f7c84b24381af0d3e731fd883065efc3b8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0810 22:32:23.731679    4347 start.go:317] acquired machines lock for "multinode-20210810223223-30291" in 43.504µs
	I0810 22:32:23.731711    4347 start.go:89] Provisioning new machine with config: &{Name:multinode-20210810223223-30291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12122/minikube-v1.22.0-1628238775-12122.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-20210810223223-30291 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0810 22:32:23.731793    4347 start.go:126] createHost starting for "" (driver="kvm2")
	I0810 22:32:23.733836    4347 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0810 22:32:23.733948    4347 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:32:23.733987    4347 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:32:23.743753    4347 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44777
	I0810 22:32:23.744195    4347 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:32:23.744720    4347 main.go:130] libmachine: Using API Version  1
	I0810 22:32:23.744760    4347 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:32:23.745072    4347 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:32:23.745242    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetMachineName
	I0810 22:32:23.745380    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .DriverName
	I0810 22:32:23.745533    4347 start.go:160] libmachine.API.Create for "multinode-20210810223223-30291" (driver="kvm2")
	I0810 22:32:23.745567    4347 client.go:168] LocalClient.Create starting
	I0810 22:32:23.745604    4347 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem
	I0810 22:32:23.745632    4347 main.go:130] libmachine: Decoding PEM data...
	I0810 22:32:23.745654    4347 main.go:130] libmachine: Parsing certificate...
	I0810 22:32:23.745814    4347 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem
	I0810 22:32:23.745839    4347 main.go:130] libmachine: Decoding PEM data...
	I0810 22:32:23.745863    4347 main.go:130] libmachine: Parsing certificate...
	I0810 22:32:23.745921    4347 main.go:130] libmachine: Running pre-create checks...
	I0810 22:32:23.745934    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .PreCreateCheck
	I0810 22:32:23.746288    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetConfigRaw
	I0810 22:32:23.746660    4347 main.go:130] libmachine: Creating machine...
	I0810 22:32:23.746679    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .Create
	I0810 22:32:23.746779    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Creating KVM machine...
	I0810 22:32:23.749187    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | found existing default KVM network
	I0810 22:32:23.750119    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | I0810 22:32:23.749981    4371 network.go:240] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b4:06:d9}}
	I0810 22:32:23.750960    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | I0810 22:32:23.750886    4371 network.go:288] reserving subnet 192.168.50.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.50.0:0xc0000a05e0] misses:0}
	I0810 22:32:23.750994    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | I0810 22:32:23.750929    4371 network.go:235] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0810 22:32:23.778684    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | trying to create private KVM network mk-multinode-20210810223223-30291 192.168.50.0/24...
	I0810 22:32:24.044571    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | private KVM network mk-multinode-20210810223223-30291 192.168.50.0/24 created
	I0810 22:32:24.044622    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Setting up store path in /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291 ...
	I0810 22:32:24.044644    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | I0810 22:32:24.044521    4371 common.go:108] Making disk image using store path: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	I0810 22:32:24.044669    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Building disk image from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/iso/minikube-v1.22.0-1628238775-12122.iso
	I0810 22:32:24.044705    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Downloading /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/iso/minikube-v1.22.0-1628238775-12122.iso...
	I0810 22:32:24.253986    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | I0810 22:32:24.253861    4371 common.go:115] Creating ssh key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291/id_rsa...
	I0810 22:32:24.632302    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | I0810 22:32:24.632164    4371 common.go:121] Creating raw disk image: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291/multinode-20210810223223-30291.rawdisk...
	I0810 22:32:24.632334    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | Writing magic tar header
	I0810 22:32:24.632352    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | Writing SSH key tar header
	I0810 22:32:24.632366    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | I0810 22:32:24.632275    4371 common.go:135] Fixing permissions on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291 ...
	I0810 22:32:24.632388    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291
	I0810 22:32:24.632407    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines
	I0810 22:32:24.632418    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	I0810 22:32:24.632436    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291 (perms=drwx------)
	I0810 22:32:24.632455    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0
	I0810 22:32:24.632471    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines (perms=drwxr-xr-x)
	I0810 22:32:24.632499    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube (perms=drwxr-xr-x)
	I0810 22:32:24.632520    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0 (perms=drwxr-xr-x)
	I0810 22:32:24.632536    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0810 22:32:24.632555    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | Checking permissions on dir: /home/jenkins
	I0810 22:32:24.632564    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | Checking permissions on dir: /home
	I0810 22:32:24.632574    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | Skipping /home - not owner
	I0810 22:32:24.632613    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxr-xr-x)
	I0810 22:32:24.632634    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0810 22:32:24.632643    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Creating domain...
	I0810 22:32:24.657516    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:67:0e:c9 in network default
	I0810 22:32:24.657949    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:24.657965    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Ensuring networks are active...
	I0810 22:32:24.659918    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Ensuring network default is active
	I0810 22:32:24.660178    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Ensuring network mk-multinode-20210810223223-30291 is active
	I0810 22:32:24.660673    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Getting domain xml...
	I0810 22:32:24.662584    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Creating domain...
	I0810 22:32:25.061208    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Waiting to get IP...
	I0810 22:32:25.062227    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:25.062647    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | unable to find current IP address of domain multinode-20210810223223-30291 in network mk-multinode-20210810223223-30291
	I0810 22:32:25.062677    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | I0810 22:32:25.062606    4371 retry.go:31] will retry after 263.082536ms: waiting for machine to come up
	I0810 22:32:25.327001    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:25.327503    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | unable to find current IP address of domain multinode-20210810223223-30291 in network mk-multinode-20210810223223-30291
	I0810 22:32:25.327552    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | I0810 22:32:25.327457    4371 retry.go:31] will retry after 381.329545ms: waiting for machine to come up
	I0810 22:32:25.710117    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:25.710642    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | unable to find current IP address of domain multinode-20210810223223-30291 in network mk-multinode-20210810223223-30291
	I0810 22:32:25.710666    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | I0810 22:32:25.710598    4371 retry.go:31] will retry after 422.765636ms: waiting for machine to come up
	I0810 22:32:26.135175    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:26.135649    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | unable to find current IP address of domain multinode-20210810223223-30291 in network mk-multinode-20210810223223-30291
	I0810 22:32:26.135679    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | I0810 22:32:26.135601    4371 retry.go:31] will retry after 473.074753ms: waiting for machine to come up
	I0810 22:32:26.609721    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:26.610222    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | unable to find current IP address of domain multinode-20210810223223-30291 in network mk-multinode-20210810223223-30291
	I0810 22:32:26.610270    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | I0810 22:32:26.610183    4371 retry.go:31] will retry after 587.352751ms: waiting for machine to come up
	I0810 22:32:27.198598    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:27.199034    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | unable to find current IP address of domain multinode-20210810223223-30291 in network mk-multinode-20210810223223-30291
	I0810 22:32:27.199064    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | I0810 22:32:27.198988    4371 retry.go:31] will retry after 834.206799ms: waiting for machine to come up
	I0810 22:32:28.034899    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:28.035375    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | unable to find current IP address of domain multinode-20210810223223-30291 in network mk-multinode-20210810223223-30291
	I0810 22:32:28.035403    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | I0810 22:32:28.035332    4371 retry.go:31] will retry after 746.553905ms: waiting for machine to come up
	I0810 22:32:28.783767    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:28.784218    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | unable to find current IP address of domain multinode-20210810223223-30291 in network mk-multinode-20210810223223-30291
	I0810 22:32:28.784247    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | I0810 22:32:28.784180    4371 retry.go:31] will retry after 987.362415ms: waiting for machine to come up
	I0810 22:32:29.773313    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:29.773753    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | unable to find current IP address of domain multinode-20210810223223-30291 in network mk-multinode-20210810223223-30291
	I0810 22:32:29.773784    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | I0810 22:32:29.773699    4371 retry.go:31] will retry after 1.189835008s: waiting for machine to come up
	I0810 22:32:30.964875    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:30.965436    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | unable to find current IP address of domain multinode-20210810223223-30291 in network mk-multinode-20210810223223-30291
	I0810 22:32:30.965466    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | I0810 22:32:30.965382    4371 retry.go:31] will retry after 1.677229867s: waiting for machine to come up
	I0810 22:32:32.643666    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:32.644156    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | unable to find current IP address of domain multinode-20210810223223-30291 in network mk-multinode-20210810223223-30291
	I0810 22:32:32.644191    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | I0810 22:32:32.644079    4371 retry.go:31] will retry after 2.346016261s: waiting for machine to come up
	I0810 22:32:34.992160    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:34.992686    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | unable to find current IP address of domain multinode-20210810223223-30291 in network mk-multinode-20210810223223-30291
	I0810 22:32:34.992713    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | I0810 22:32:34.992658    4371 retry.go:31] will retry after 3.36678925s: waiting for machine to come up
	I0810 22:32:38.361097    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:38.361595    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Found IP for machine: 192.168.50.32
	I0810 22:32:38.361618    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Reserving static IP address...
	I0810 22:32:38.361631    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has current primary IP address 192.168.50.32 and MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:38.362042    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | unable to find host DHCP lease matching {name: "multinode-20210810223223-30291", mac: "52:54:00:ce:d8:89", ip: "192.168.50.32"} in network mk-multinode-20210810223223-30291
	I0810 22:32:38.408232    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Reserved static IP address: 192.168.50.32
	I0810 22:32:38.408261    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Waiting for SSH to be available...
	I0810 22:32:38.408283    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | Getting to WaitForSSH function...
	I0810 22:32:38.414513    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:38.414911    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:89", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:32:38 +0000 UTC Type:0 Mac:52:54:00:ce:d8:89 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ce:d8:89}
	I0810 22:32:38.414940    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined IP address 192.168.50.32 and MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:38.415095    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | Using SSH client type: external
	I0810 22:32:38.415120    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291/id_rsa (-rw-------)
	I0810 22:32:38.415155    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0810 22:32:38.415167    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | About to run SSH command:
	I0810 22:32:38.415206    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | exit 0
	I0810 22:32:38.567481    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | SSH cmd err, output: <nil>: 
	I0810 22:32:38.567975    4347 main.go:130] libmachine: (multinode-20210810223223-30291) KVM machine creation complete!
	I0810 22:32:38.568047    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetConfigRaw
	I0810 22:32:38.568706    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .DriverName
	I0810 22:32:38.568919    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .DriverName
	I0810 22:32:38.569094    4347 main.go:130] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0810 22:32:38.569114    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetState
	I0810 22:32:38.571674    4347 main.go:130] libmachine: Detecting operating system of created instance...
	I0810 22:32:38.571689    4347 main.go:130] libmachine: Waiting for SSH to be available...
	I0810 22:32:38.571696    4347 main.go:130] libmachine: Getting to WaitForSSH function...
	I0810 22:32:38.571706    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHHostname
	I0810 22:32:38.576433    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:38.576718    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:89", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:32:38 +0000 UTC Type:0 Mac:52:54:00:ce:d8:89 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:multinode-20210810223223-30291 Clientid:01:52:54:00:ce:d8:89}
	I0810 22:32:38.576750    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined IP address 192.168.50.32 and MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:38.576945    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHPort
	I0810 22:32:38.577117    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHKeyPath
	I0810 22:32:38.577263    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHKeyPath
	I0810 22:32:38.577414    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHUsername
	I0810 22:32:38.577575    4347 main.go:130] libmachine: Using SSH client type: native
	I0810 22:32:38.577853    4347 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0810 22:32:38.577871    4347 main.go:130] libmachine: About to run SSH command:
	exit 0
	I0810 22:32:38.687548    4347 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0810 22:32:38.875986    4347 main.go:130] libmachine: Detecting the provisioner...
	I0810 22:32:38.876009    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHHostname
	I0810 22:32:38.881035    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:38.881355    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:89", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:32:38 +0000 UTC Type:0 Mac:52:54:00:ce:d8:89 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:multinode-20210810223223-30291 Clientid:01:52:54:00:ce:d8:89}
	I0810 22:32:38.881387    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined IP address 192.168.50.32 and MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:38.881491    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHPort
	I0810 22:32:38.881675    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHKeyPath
	I0810 22:32:38.881835    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHKeyPath
	I0810 22:32:38.881958    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHUsername
	I0810 22:32:38.882095    4347 main.go:130] libmachine: Using SSH client type: native
	I0810 22:32:38.882264    4347 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0810 22:32:38.882278    4347 main.go:130] libmachine: About to run SSH command:
	cat /etc/os-release
	I0810 22:32:38.992654    4347 main.go:130] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2020.02.12
	ID=buildroot
	VERSION_ID=2020.02.12
	PRETTY_NAME="Buildroot 2020.02.12"
	
	I0810 22:32:38.992768    4347 main.go:130] libmachine: found compatible host: buildroot
	I0810 22:32:38.992784    4347 main.go:130] libmachine: Provisioning with buildroot...
	I0810 22:32:38.992796    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetMachineName
	I0810 22:32:38.993055    4347 buildroot.go:166] provisioning hostname "multinode-20210810223223-30291"
	I0810 22:32:38.993084    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetMachineName
	I0810 22:32:38.993282    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHHostname
	I0810 22:32:38.998123    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:38.998402    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:89", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:32:38 +0000 UTC Type:0 Mac:52:54:00:ce:d8:89 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:multinode-20210810223223-30291 Clientid:01:52:54:00:ce:d8:89}
	I0810 22:32:38.998451    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined IP address 192.168.50.32 and MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:38.998562    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHPort
	I0810 22:32:38.998734    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHKeyPath
	I0810 22:32:38.998865    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHKeyPath
	I0810 22:32:38.999026    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHUsername
	I0810 22:32:38.999208    4347 main.go:130] libmachine: Using SSH client type: native
	I0810 22:32:38.999353    4347 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0810 22:32:38.999367    4347 main.go:130] libmachine: About to run SSH command:
	sudo hostname multinode-20210810223223-30291 && echo "multinode-20210810223223-30291" | sudo tee /etc/hostname
	I0810 22:32:39.116794    4347 main.go:130] libmachine: SSH cmd err, output: <nil>: multinode-20210810223223-30291
	
	I0810 22:32:39.116835    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHHostname
	I0810 22:32:39.122121    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:39.122423    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:89", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:32:38 +0000 UTC Type:0 Mac:52:54:00:ce:d8:89 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:multinode-20210810223223-30291 Clientid:01:52:54:00:ce:d8:89}
	I0810 22:32:39.122461    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined IP address 192.168.50.32 and MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:39.122570    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHPort
	I0810 22:32:39.122760    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHKeyPath
	I0810 22:32:39.122917    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHKeyPath
	I0810 22:32:39.123059    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHUsername
	I0810 22:32:39.123252    4347 main.go:130] libmachine: Using SSH client type: native
	I0810 22:32:39.123425    4347 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0810 22:32:39.123450    4347 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20210810223223-30291' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20210810223223-30291/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20210810223223-30291' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0810 22:32:39.239162    4347 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0810 22:32:39.239207    4347 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube}
	I0810 22:32:39.239264    4347 buildroot.go:174] setting up certificates
	I0810 22:32:39.239277    4347 provision.go:83] configureAuth start
	I0810 22:32:39.239293    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetMachineName
	I0810 22:32:39.239588    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetIP
	I0810 22:32:39.244650    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:39.244943    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:89", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:32:38 +0000 UTC Type:0 Mac:52:54:00:ce:d8:89 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:multinode-20210810223223-30291 Clientid:01:52:54:00:ce:d8:89}
	I0810 22:32:39.244982    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined IP address 192.168.50.32 and MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:39.245047    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHHostname
	I0810 22:32:39.249030    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:39.249296    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:89", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:32:38 +0000 UTC Type:0 Mac:52:54:00:ce:d8:89 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:multinode-20210810223223-30291 Clientid:01:52:54:00:ce:d8:89}
	I0810 22:32:39.249320    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined IP address 192.168.50.32 and MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:39.249427    4347 provision.go:137] copyHostCerts
	I0810 22:32:39.249459    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem
	I0810 22:32:39.249497    4347 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem, removing ...
	I0810 22:32:39.249520    4347 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem
	I0810 22:32:39.249578    4347 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem (1082 bytes)
	I0810 22:32:39.249646    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem
	I0810 22:32:39.249665    4347 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem, removing ...
	I0810 22:32:39.249672    4347 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem
	I0810 22:32:39.249694    4347 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem (1123 bytes)
	I0810 22:32:39.249730    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem
	I0810 22:32:39.249746    4347 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem, removing ...
	I0810 22:32:39.249753    4347 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem
	I0810 22:32:39.249769    4347 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem (1679 bytes)
	I0810 22:32:39.249808    4347 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem org=jenkins.multinode-20210810223223-30291 san=[192.168.50.32 192.168.50.32 localhost 127.0.0.1 minikube multinode-20210810223223-30291]
	I0810 22:32:39.467963    4347 provision.go:171] copyRemoteCerts
	I0810 22:32:39.468023    4347 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0810 22:32:39.468053    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHHostname
	I0810 22:32:39.473286    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:39.473570    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:89", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:32:38 +0000 UTC Type:0 Mac:52:54:00:ce:d8:89 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:multinode-20210810223223-30291 Clientid:01:52:54:00:ce:d8:89}
	I0810 22:32:39.473595    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined IP address 192.168.50.32 and MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:39.473746    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHPort
	I0810 22:32:39.473942    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHKeyPath
	I0810 22:32:39.474074    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHUsername
	I0810 22:32:39.474183    4347 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291/id_rsa Username:docker}
	I0810 22:32:39.555442    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0810 22:32:39.555506    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0810 22:32:39.572241    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0810 22:32:39.572317    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes)
	I0810 22:32:39.588495    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0810 22:32:39.588544    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0810 22:32:39.604369    4347 provision.go:86] duration metric: configureAuth took 365.078335ms
	I0810 22:32:39.604394    4347 buildroot.go:189] setting minikube options for container-runtime
	I0810 22:32:39.604669    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHHostname
	I0810 22:32:39.610136    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:39.610474    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:89", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:32:38 +0000 UTC Type:0 Mac:52:54:00:ce:d8:89 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:multinode-20210810223223-30291 Clientid:01:52:54:00:ce:d8:89}
	I0810 22:32:39.610523    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined IP address 192.168.50.32 and MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:39.610637    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHPort
	I0810 22:32:39.610859    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHKeyPath
	I0810 22:32:39.611024    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHKeyPath
	I0810 22:32:39.611190    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHUsername
	I0810 22:32:39.611341    4347 main.go:130] libmachine: Using SSH client type: native
	I0810 22:32:39.611480    4347 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0810 22:32:39.611494    4347 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0810 22:32:40.330506    4347 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0810 22:32:40.330536    4347 main.go:130] libmachine: Checking connection to Docker...
	I0810 22:32:40.330544    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetURL
	I0810 22:32:40.333397    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | Using libvirt version 3000000
	I0810 22:32:40.337733    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:40.338027    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:89", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:32:38 +0000 UTC Type:0 Mac:52:54:00:ce:d8:89 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:multinode-20210810223223-30291 Clientid:01:52:54:00:ce:d8:89}
	I0810 22:32:40.338052    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined IP address 192.168.50.32 and MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:40.338195    4347 main.go:130] libmachine: Docker is up and running!
	I0810 22:32:40.338211    4347 main.go:130] libmachine: Reticulating splines...
	I0810 22:32:40.338220    4347 client.go:171] LocalClient.Create took 16.592642463s
	I0810 22:32:40.338240    4347 start.go:168] duration metric: libmachine.API.Create for "multinode-20210810223223-30291" took 16.592708779s
	I0810 22:32:40.338252    4347 start.go:267] post-start starting for "multinode-20210810223223-30291" (driver="kvm2")
	I0810 22:32:40.338260    4347 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0810 22:32:40.338278    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .DriverName
	I0810 22:32:40.338513    4347 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0810 22:32:40.338547    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHHostname
	I0810 22:32:40.342637    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:40.342919    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:89", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:32:38 +0000 UTC Type:0 Mac:52:54:00:ce:d8:89 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:multinode-20210810223223-30291 Clientid:01:52:54:00:ce:d8:89}
	I0810 22:32:40.342950    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined IP address 192.168.50.32 and MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:40.343050    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHPort
	I0810 22:32:40.343223    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHKeyPath
	I0810 22:32:40.343349    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHUsername
	I0810 22:32:40.343473    4347 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291/id_rsa Username:docker}
	I0810 22:32:40.427667    4347 ssh_runner.go:149] Run: cat /etc/os-release
	I0810 22:32:40.432262    4347 command_runner.go:124] > NAME=Buildroot
	I0810 22:32:40.432281    4347 command_runner.go:124] > VERSION=2020.02.12
	I0810 22:32:40.432286    4347 command_runner.go:124] > ID=buildroot
	I0810 22:32:40.432291    4347 command_runner.go:124] > VERSION_ID=2020.02.12
	I0810 22:32:40.432296    4347 command_runner.go:124] > PRETTY_NAME="Buildroot 2020.02.12"
	I0810 22:32:40.432320    4347 info.go:137] Remote host: Buildroot 2020.02.12
	I0810 22:32:40.432332    4347 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/addons for local assets ...
	I0810 22:32:40.432388    4347 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files for local assets ...
	I0810 22:32:40.432517    4347 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/302912.pem -> 302912.pem in /etc/ssl/certs
	I0810 22:32:40.432532    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/302912.pem -> /etc/ssl/certs/302912.pem
	I0810 22:32:40.432637    4347 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0810 22:32:40.439152    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/302912.pem --> /etc/ssl/certs/302912.pem (1708 bytes)
	I0810 22:32:40.455474    4347 start.go:270] post-start completed in 117.207443ms
	I0810 22:32:40.455530    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetConfigRaw
	I0810 22:32:40.456189    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetIP
	I0810 22:32:40.461322    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:40.461613    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:89", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:32:38 +0000 UTC Type:0 Mac:52:54:00:ce:d8:89 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:multinode-20210810223223-30291 Clientid:01:52:54:00:ce:d8:89}
	I0810 22:32:40.461647    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined IP address 192.168.50.32 and MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:40.461845    4347 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/config.json ...
	I0810 22:32:40.462059    4347 start.go:129] duration metric: createHost completed in 16.730255053s
	I0810 22:32:40.462072    4347 start.go:80] releasing machines lock for "multinode-20210810223223-30291", held for 16.730377621s
	I0810 22:32:40.462110    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .DriverName
	I0810 22:32:40.462318    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetIP
	I0810 22:32:40.466485    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:40.466754    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:89", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:32:38 +0000 UTC Type:0 Mac:52:54:00:ce:d8:89 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:multinode-20210810223223-30291 Clientid:01:52:54:00:ce:d8:89}
	I0810 22:32:40.466784    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined IP address 192.168.50.32 and MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:40.466869    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .DriverName
	I0810 22:32:40.467033    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .DriverName
	I0810 22:32:40.467493    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .DriverName
	I0810 22:32:40.467673    4347 ssh_runner.go:149] Run: systemctl --version
	I0810 22:32:40.467695    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHHostname
	I0810 22:32:40.467734    4347 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0810 22:32:40.467780    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHHostname
	I0810 22:32:40.472281    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:40.472559    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:89", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:32:38 +0000 UTC Type:0 Mac:52:54:00:ce:d8:89 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:multinode-20210810223223-30291 Clientid:01:52:54:00:ce:d8:89}
	I0810 22:32:40.472592    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined IP address 192.168.50.32 and MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:40.472682    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHPort
	I0810 22:32:40.472861    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHKeyPath
	I0810 22:32:40.472991    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHUsername
	I0810 22:32:40.473104    4347 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291/id_rsa Username:docker}
	I0810 22:32:40.473367    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:40.473722    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:89", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:32:38 +0000 UTC Type:0 Mac:52:54:00:ce:d8:89 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:multinode-20210810223223-30291 Clientid:01:52:54:00:ce:d8:89}
	I0810 22:32:40.473751    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined IP address 192.168.50.32 and MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:40.473902    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHPort
	I0810 22:32:40.474067    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHKeyPath
	I0810 22:32:40.474208    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHUsername
	I0810 22:32:40.474330    4347 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291/id_rsa Username:docker}
	I0810 22:32:40.562649    4347 command_runner.go:124] > <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
	I0810 22:32:40.562674    4347 command_runner.go:124] > <TITLE>302 Moved</TITLE></HEAD><BODY>
	I0810 22:32:40.562679    4347 command_runner.go:124] > <H1>302 Moved</H1>
	I0810 22:32:40.562683    4347 command_runner.go:124] > The document has moved
	I0810 22:32:40.562689    4347 command_runner.go:124] > <A HREF="https://cloud.google.com/container-registry/">here</A>.
	I0810 22:32:40.562693    4347 command_runner.go:124] > </BODY></HTML>
	I0810 22:32:40.563259    4347 command_runner.go:124] > systemd 244 (244)
	I0810 22:32:40.563292    4347 command_runner.go:124] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK +SYSVINIT +UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0810 22:32:40.563336    4347 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0810 22:32:40.563456    4347 ssh_runner.go:149] Run: sudo crictl images --output json
	I0810 22:32:40.586443    4347 command_runner.go:124] ! time="2021-08-10T22:32:40Z" level=warning msg="image connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	I0810 22:32:42.568998    4347 command_runner.go:124] ! time="2021-08-10T22:32:42Z" level=error msg="connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	I0810 22:32:44.556916    4347 command_runner.go:124] ! time="2021-08-10T22:32:44Z" level=error msg="connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	I0810 22:32:44.561313    4347 command_runner.go:124] > {
	I0810 22:32:44.561332    4347 command_runner.go:124] >   "images": [
	I0810 22:32:44.561337    4347 command_runner.go:124] >   ]
	I0810 22:32:44.561342    4347 command_runner.go:124] > }
	I0810 22:32:44.561562    4347 ssh_runner.go:189] Completed: sudo crictl images --output json: (3.998071829s)
	I0810 22:32:44.561692    4347 crio.go:420] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.21.3". assuming images are not preloaded.
	I0810 22:32:44.561754    4347 ssh_runner.go:149] Run: which lz4
	I0810 22:32:44.565778    4347 command_runner.go:124] > /bin/lz4
	I0810 22:32:44.565804    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0810 22:32:44.565879    4347 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0810 22:32:44.570036    4347 command_runner.go:124] ! stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0810 22:32:44.570401    4347 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0810 22:32:44.570426    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (576184326 bytes)
	I0810 22:32:47.905613    4347 crio.go:362] Took 3.339764 seconds to copy over tarball
	I0810 22:32:47.905754    4347 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0810 22:32:52.596739    4347 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (4.69094083s)
	I0810 22:32:52.596771    4347 crio.go:369] Took 4.691121 seconds to extract the tarball
	I0810 22:32:52.596783    4347 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0810 22:32:52.635335    4347 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0810 22:32:52.647855    4347 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0810 22:32:52.657276    4347 docker.go:153] disabling docker service ...
	I0810 22:32:52.657334    4347 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0810 22:32:52.667758    4347 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0810 22:32:52.676016    4347 command_runner.go:124] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0810 22:32:52.676714    4347 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0810 22:32:52.685844    4347 command_runner.go:124] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0810 22:32:52.852034    4347 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0810 22:32:52.862179    4347 command_runner.go:124] ! Unit docker.service does not exist, proceeding anyway.
	I0810 22:32:52.862670    4347 command_runner.go:124] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0810 22:32:52.992489    4347 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0810 22:32:53.003425    4347 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0810 22:32:53.017245    4347 command_runner.go:124] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0810 22:32:53.017271    4347 command_runner.go:124] > image-endpoint: unix:///var/run/crio/crio.sock
	I0810 22:32:53.017304    4347 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0810 22:32:53.024802    4347 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0810 22:32:53.024841    4347 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
	I0810 22:32:53.032539    4347 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0810 22:32:53.038986    4347 command_runner.go:124] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0810 22:32:53.039360    4347 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0810 22:32:53.039419    4347 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0810 22:32:53.055322    4347 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0810 22:32:53.062095    4347 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0810 22:32:53.193612    4347 ssh_runner.go:149] Run: sudo systemctl start crio
	I0810 22:32:53.302005    4347 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0810 22:32:53.302098    4347 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0810 22:32:53.307766    4347 command_runner.go:124] >   File: /var/run/crio/crio.sock
	I0810 22:32:53.307792    4347 command_runner.go:124] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0810 22:32:53.307803    4347 command_runner.go:124] > Device: 14h/20d	Inode: 29710       Links: 1
	I0810 22:32:53.307813    4347 command_runner.go:124] > Access: (0755/srwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0810 22:32:53.307821    4347 command_runner.go:124] > Access: 2021-08-10 22:32:44.505329266 +0000
	I0810 22:32:53.307830    4347 command_runner.go:124] > Modify: 2021-08-10 22:32:40.217909913 +0000
	I0810 22:32:53.307837    4347 command_runner.go:124] > Change: 2021-08-10 22:32:40.217909913 +0000
	I0810 22:32:53.307843    4347 command_runner.go:124] >  Birth: -
	I0810 22:32:53.307897    4347 start.go:417] Will wait 60s for crictl version
	I0810 22:32:53.307955    4347 ssh_runner.go:149] Run: sudo crictl version
	I0810 22:32:53.338312    4347 command_runner.go:124] > Version:  0.1.0
	I0810 22:32:53.338337    4347 command_runner.go:124] > RuntimeName:  cri-o
	I0810 22:32:53.338343    4347 command_runner.go:124] > RuntimeVersion:  1.20.2
	I0810 22:32:53.338350    4347 command_runner.go:124] > RuntimeApiVersion:  v1alpha1
	I0810 22:32:53.338599    4347 start.go:426] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.2
	RuntimeApiVersion:  v1alpha1
	I0810 22:32:53.338687    4347 ssh_runner.go:149] Run: crio --version
	I0810 22:32:53.605450    4347 command_runner.go:124] ! time="2021-08-10T22:32:53Z" level=info msg="Starting CRI-O, version: 1.20.2, git: d5a999ad0a35d895ded554e1e18c142075501a98(clean)"
	I0810 22:32:57.305530    4347 command_runner.go:124] > crio version 1.20.2
	I0810 22:32:57.305545    4347 command_runner.go:124] > Version:       1.20.2
	I0810 22:32:57.305552    4347 command_runner.go:124] > GitCommit:     d5a999ad0a35d895ded554e1e18c142075501a98
	I0810 22:32:57.305560    4347 command_runner.go:124] > GitTreeState:  clean
	I0810 22:32:57.305566    4347 command_runner.go:124] > BuildDate:     2021-08-06T09:19:16Z
	I0810 22:32:57.305570    4347 command_runner.go:124] > GoVersion:     go1.13.15
	I0810 22:32:57.305574    4347 command_runner.go:124] > Compiler:      gc
	I0810 22:32:57.305579    4347 command_runner.go:124] > Platform:      linux/amd64
	I0810 22:32:57.305600    4347 ssh_runner.go:189] Completed: crio --version: (3.966890142s)
	I0810 22:32:57.305670    4347 ssh_runner.go:149] Run: crio --version
	I0810 22:32:57.513335    4347 command_runner.go:124] ! time="2021-08-10T22:32:57Z" level=info msg="Starting CRI-O, version: 1.20.2, git: d5a999ad0a35d895ded554e1e18c142075501a98(clean)"
	I0810 22:32:57.515123    4347 command_runner.go:124] > crio version 1.20.2
	I0810 22:32:57.515146    4347 command_runner.go:124] > Version:       1.20.2
	I0810 22:32:57.515157    4347 command_runner.go:124] > GitCommit:     d5a999ad0a35d895ded554e1e18c142075501a98
	I0810 22:32:57.515164    4347 command_runner.go:124] > GitTreeState:  clean
	I0810 22:32:57.515173    4347 command_runner.go:124] > BuildDate:     2021-08-06T09:19:16Z
	I0810 22:32:57.515180    4347 command_runner.go:124] > GoVersion:     go1.13.15
	I0810 22:32:57.515190    4347 command_runner.go:124] > Compiler:      gc
	I0810 22:32:57.515197    4347 command_runner.go:124] > Platform:      linux/amd64
	I0810 22:32:57.523162    4347 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.2 ...
	I0810 22:32:57.523248    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetIP
	I0810 22:32:57.528697    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:57.529018    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:89", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:32:38 +0000 UTC Type:0 Mac:52:54:00:ce:d8:89 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:multinode-20210810223223-30291 Clientid:01:52:54:00:ce:d8:89}
	I0810 22:32:57.529045    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined IP address 192.168.50.32 and MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:32:57.529208    4347 ssh_runner.go:149] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0810 22:32:57.533352    4347 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0810 22:32:57.543504    4347 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0810 22:32:57.543552    4347 ssh_runner.go:149] Run: sudo crictl images --output json
	I0810 22:32:57.612368    4347 command_runner.go:124] > {
	I0810 22:32:57.612397    4347 command_runner.go:124] >   "images": [
	I0810 22:32:57.612404    4347 command_runner.go:124] >     {
	I0810 22:32:57.612416    4347 command_runner.go:124] >       "id": "6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb",
	I0810 22:32:57.612427    4347 command_runner.go:124] >       "repoTags": [
	I0810 22:32:57.612437    4347 command_runner.go:124] >         "docker.io/kindest/kindnetd:v20210326-1e038dc5"
	I0810 22:32:57.612442    4347 command_runner.go:124] >       ],
	I0810 22:32:57.612449    4347 command_runner.go:124] >       "repoDigests": [
	I0810 22:32:57.612462    4347 command_runner.go:124] >         "docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1",
	I0810 22:32:57.612479    4347 command_runner.go:124] >         "docker.io/kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c"
	I0810 22:32:57.612488    4347 command_runner.go:124] >       ],
	I0810 22:32:57.612495    4347 command_runner.go:124] >       "size": "119984626",
	I0810 22:32:57.612506    4347 command_runner.go:124] >       "uid": null,
	I0810 22:32:57.612515    4347 command_runner.go:124] >       "username": "",
	I0810 22:32:57.612524    4347 command_runner.go:124] >       "spec": null
	I0810 22:32:57.612530    4347 command_runner.go:124] >     },
	I0810 22:32:57.612535    4347 command_runner.go:124] >     {
	I0810 22:32:57.612547    4347 command_runner.go:124] >       "id": "9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db",
	I0810 22:32:57.612556    4347 command_runner.go:124] >       "repoTags": [
	I0810 22:32:57.612564    4347 command_runner.go:124] >         "docker.io/kubernetesui/dashboard:v2.1.0"
	I0810 22:32:57.612572    4347 command_runner.go:124] >       ],
	I0810 22:32:57.612579    4347 command_runner.go:124] >       "repoDigests": [
	I0810 22:32:57.612594    4347 command_runner.go:124] >         "docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f",
	I0810 22:32:57.612610    4347 command_runner.go:124] >         "docker.io/kubernetesui/dashboard@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6"
	I0810 22:32:57.612619    4347 command_runner.go:124] >       ],
	I0810 22:32:57.612625    4347 command_runner.go:124] >       "size": "228528983",
	I0810 22:32:57.612634    4347 command_runner.go:124] >       "uid": null,
	I0810 22:32:57.612641    4347 command_runner.go:124] >       "username": "nonroot",
	I0810 22:32:57.612652    4347 command_runner.go:124] >       "spec": null
	I0810 22:32:57.612658    4347 command_runner.go:124] >     },
	I0810 22:32:57.612663    4347 command_runner.go:124] >     {
	I0810 22:32:57.612687    4347 command_runner.go:124] >       "id": "86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4",
	I0810 22:32:57.612697    4347 command_runner.go:124] >       "repoTags": [
	I0810 22:32:57.612707    4347 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper:v1.0.4"
	I0810 22:32:57.612715    4347 command_runner.go:124] >       ],
	I0810 22:32:57.612722    4347 command_runner.go:124] >       "repoDigests": [
	I0810 22:32:57.612737    4347 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper@sha256:555981a24f184420f3be0c79d4efb6c948a85cfce84034f85a563f4151a81cbf",
	I0810 22:32:57.612753    4347 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper@sha256:d78f995c07124874c2a2e9b404cffa6bc6233668d63d6c6210574971f3d5914b"
	I0810 22:32:57.612761    4347 command_runner.go:124] >       ],
	I0810 22:32:57.612767    4347 command_runner.go:124] >       "size": "36950651",
	I0810 22:32:57.612774    4347 command_runner.go:124] >       "uid": null,
	I0810 22:32:57.612780    4347 command_runner.go:124] >       "username": "",
	I0810 22:32:57.612789    4347 command_runner.go:124] >       "spec": null
	I0810 22:32:57.612813    4347 command_runner.go:124] >     },
	I0810 22:32:57.612823    4347 command_runner.go:124] >     {
	I0810 22:32:57.612834    4347 command_runner.go:124] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0810 22:32:57.612845    4347 command_runner.go:124] >       "repoTags": [
	I0810 22:32:57.612851    4347 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0810 22:32:57.612857    4347 command_runner.go:124] >       ],
	I0810 22:32:57.612861    4347 command_runner.go:124] >       "repoDigests": [
	I0810 22:32:57.612870    4347 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0810 22:32:57.612881    4347 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0810 22:32:57.612887    4347 command_runner.go:124] >       ],
	I0810 22:32:57.612891    4347 command_runner.go:124] >       "size": "31470524",
	I0810 22:32:57.612898    4347 command_runner.go:124] >       "uid": null,
	I0810 22:32:57.612902    4347 command_runner.go:124] >       "username": "",
	I0810 22:32:57.612908    4347 command_runner.go:124] >       "spec": null
	I0810 22:32:57.612911    4347 command_runner.go:124] >     },
	I0810 22:32:57.612914    4347 command_runner.go:124] >     {
	I0810 22:32:57.612921    4347 command_runner.go:124] >       "id": "296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899",
	I0810 22:32:57.612927    4347 command_runner.go:124] >       "repoTags": [
	I0810 22:32:57.612933    4347 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns:v1.8.0"
	I0810 22:32:57.612939    4347 command_runner.go:124] >       ],
	I0810 22:32:57.612943    4347 command_runner.go:124] >       "repoDigests": [
	I0810 22:32:57.612952    4347 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61",
	I0810 22:32:57.612962    4347 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e"
	I0810 22:32:57.612966    4347 command_runner.go:124] >       ],
	I0810 22:32:57.612970    4347 command_runner.go:124] >       "size": "42585056",
	I0810 22:32:57.612975    4347 command_runner.go:124] >       "uid": null,
	I0810 22:32:57.612978    4347 command_runner.go:124] >       "username": "",
	I0810 22:32:57.612982    4347 command_runner.go:124] >       "spec": null
	I0810 22:32:57.612985    4347 command_runner.go:124] >     },
	I0810 22:32:57.612989    4347 command_runner.go:124] >     {
	I0810 22:32:57.612995    4347 command_runner.go:124] >       "id": "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934",
	I0810 22:32:57.613001    4347 command_runner.go:124] >       "repoTags": [
	I0810 22:32:57.613006    4347 command_runner.go:124] >         "k8s.gcr.io/etcd:3.4.13-0"
	I0810 22:32:57.613010    4347 command_runner.go:124] >       ],
	I0810 22:32:57.613014    4347 command_runner.go:124] >       "repoDigests": [
	I0810 22:32:57.613022    4347 command_runner.go:124] >         "k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2",
	I0810 22:32:57.613033    4347 command_runner.go:124] >         "k8s.gcr.io/etcd@sha256:bd4d2c9a19be8a492bc79df53eee199fd04b415e9993eb69f7718052602a147a"
	I0810 22:32:57.613041    4347 command_runner.go:124] >       ],
	I0810 22:32:57.613047    4347 command_runner.go:124] >       "size": "254662613",
	I0810 22:32:57.613065    4347 command_runner.go:124] >       "uid": null,
	I0810 22:32:57.613074    4347 command_runner.go:124] >       "username": "",
	I0810 22:32:57.613080    4347 command_runner.go:124] >       "spec": null
	I0810 22:32:57.613088    4347 command_runner.go:124] >     },
	I0810 22:32:57.613093    4347 command_runner.go:124] >     {
	I0810 22:32:57.613105    4347 command_runner.go:124] >       "id": "3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80",
	I0810 22:32:57.613114    4347 command_runner.go:124] >       "repoTags": [
	I0810 22:32:57.613121    4347 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver:v1.21.3"
	I0810 22:32:57.613130    4347 command_runner.go:124] >       ],
	I0810 22:32:57.613137    4347 command_runner.go:124] >       "repoDigests": [
	I0810 22:32:57.613153    4347 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2",
	I0810 22:32:57.613168    4347 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver@sha256:910cfdf034262c7b68ecb17c0885f39bdaaad07d87c9a5b6320819d8500b7ee5"
	I0810 22:32:57.613177    4347 command_runner.go:124] >       ],
	I0810 22:32:57.613183    4347 command_runner.go:124] >       "size": "126878961",
	I0810 22:32:57.613190    4347 command_runner.go:124] >       "uid": {
	I0810 22:32:57.613196    4347 command_runner.go:124] >         "value": "0"
	I0810 22:32:57.613202    4347 command_runner.go:124] >       },
	I0810 22:32:57.613208    4347 command_runner.go:124] >       "username": "",
	I0810 22:32:57.613217    4347 command_runner.go:124] >       "spec": null
	I0810 22:32:57.613222    4347 command_runner.go:124] >     },
	I0810 22:32:57.613227    4347 command_runner.go:124] >     {
	I0810 22:32:57.613238    4347 command_runner.go:124] >       "id": "bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9",
	I0810 22:32:57.613246    4347 command_runner.go:124] >       "repoTags": [
	I0810 22:32:57.613254    4347 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager:v1.21.3"
	I0810 22:32:57.613263    4347 command_runner.go:124] >       ],
	I0810 22:32:57.613269    4347 command_runner.go:124] >       "repoDigests": [
	I0810 22:32:57.613284    4347 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b",
	I0810 22:32:57.613298    4347 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager@sha256:7fb1f6614597c255b475ed8abf553e0d4e8ea211b06a90bed53eaddcfb9c354f"
	I0810 22:32:57.613306    4347 command_runner.go:124] >       ],
	I0810 22:32:57.613355    4347 command_runner.go:124] >       "size": "121087578",
	I0810 22:32:57.613371    4347 command_runner.go:124] >       "uid": {
	I0810 22:32:57.613377    4347 command_runner.go:124] >         "value": "0"
	I0810 22:32:57.613382    4347 command_runner.go:124] >       },
	I0810 22:32:57.613390    4347 command_runner.go:124] >       "username": "",
	I0810 22:32:57.613396    4347 command_runner.go:124] >       "spec": null
	I0810 22:32:57.613403    4347 command_runner.go:124] >     },
	I0810 22:32:57.613408    4347 command_runner.go:124] >     {
	I0810 22:32:57.613419    4347 command_runner.go:124] >       "id": "adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92",
	I0810 22:32:57.613428    4347 command_runner.go:124] >       "repoTags": [
	I0810 22:32:57.613439    4347 command_runner.go:124] >         "k8s.gcr.io/kube-proxy:v1.21.3"
	I0810 22:32:57.613447    4347 command_runner.go:124] >       ],
	I0810 22:32:57.613453    4347 command_runner.go:124] >       "repoDigests": [
	I0810 22:32:57.613466    4347 command_runner.go:124] >         "k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b",
	I0810 22:32:57.613480    4347 command_runner.go:124] >         "k8s.gcr.io/kube-proxy@sha256:c7778d7b97b2a822c3fe3e921d104ac42afbd38268de8df03557465780886627"
	I0810 22:32:57.613488    4347 command_runner.go:124] >       ],
	I0810 22:32:57.613495    4347 command_runner.go:124] >       "size": "105129702",
	I0810 22:32:57.613505    4347 command_runner.go:124] >       "uid": null,
	I0810 22:32:57.613513    4347 command_runner.go:124] >       "username": "",
	I0810 22:32:57.613520    4347 command_runner.go:124] >       "spec": null
	I0810 22:32:57.613525    4347 command_runner.go:124] >     },
	I0810 22:32:57.613531    4347 command_runner.go:124] >     {
	I0810 22:32:57.613541    4347 command_runner.go:124] >       "id": "6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a",
	I0810 22:32:57.613551    4347 command_runner.go:124] >       "repoTags": [
	I0810 22:32:57.613558    4347 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler:v1.21.3"
	I0810 22:32:57.613565    4347 command_runner.go:124] >       ],
	I0810 22:32:57.613572    4347 command_runner.go:124] >       "repoDigests": [
	I0810 22:32:57.613588    4347 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4",
	I0810 22:32:57.613605    4347 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler@sha256:b61779ea1bd936c137b25b3a7baa5551fbbd84fed8568d15c7c85ab1139521c0"
	I0810 22:32:57.613612    4347 command_runner.go:124] >       ],
	I0810 22:32:57.613619    4347 command_runner.go:124] >       "size": "51893338",
	I0810 22:32:57.613628    4347 command_runner.go:124] >       "uid": {
	I0810 22:32:57.613634    4347 command_runner.go:124] >         "value": "0"
	I0810 22:32:57.613644    4347 command_runner.go:124] >       },
	I0810 22:32:57.613652    4347 command_runner.go:124] >       "username": "",
	I0810 22:32:57.613658    4347 command_runner.go:124] >       "spec": null
	I0810 22:32:57.613669    4347 command_runner.go:124] >     },
	I0810 22:32:57.613680    4347 command_runner.go:124] >     {
	I0810 22:32:57.613690    4347 command_runner.go:124] >       "id": "0f8457a4c2ecaceac160805013dc3c61c63a1ff3dee74a473a36249a748e0253",
	I0810 22:32:57.613697    4347 command_runner.go:124] >       "repoTags": [
	I0810 22:32:57.613704    4347 command_runner.go:124] >         "k8s.gcr.io/pause:3.4.1"
	I0810 22:32:57.613710    4347 command_runner.go:124] >       ],
	I0810 22:32:57.613718    4347 command_runner.go:124] >       "repoDigests": [
	I0810 22:32:57.613729    4347 command_runner.go:124] >         "k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810",
	I0810 22:32:57.613744    4347 command_runner.go:124] >         "k8s.gcr.io/pause@sha256:914e745e524aa94315a25b49a7fafc0aa395e332126930593225d7a513f5a6b2"
	I0810 22:32:57.613751    4347 command_runner.go:124] >       ],
	I0810 22:32:57.613803    4347 command_runner.go:124] >       "size": "689817",
	I0810 22:32:57.613814    4347 command_runner.go:124] >       "uid": null,
	I0810 22:32:57.613818    4347 command_runner.go:124] >       "username": "",
	I0810 22:32:57.613825    4347 command_runner.go:124] >       "spec": null
	I0810 22:32:57.613829    4347 command_runner.go:124] >     }
	I0810 22:32:57.613832    4347 command_runner.go:124] >   ]
	I0810 22:32:57.613835    4347 command_runner.go:124] > }
	I0810 22:32:57.614037    4347 crio.go:424] all images are preloaded for cri-o runtime.
	I0810 22:32:57.614054    4347 crio.go:333] Images already preloaded, skipping extraction
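The "all images are preloaded" decision above reduces to parsing crictl's JSON listing and confirming every required `repoTags` entry is present. A minimal sketch of that comparison (the sample JSON is trimmed to the shape shown in the log; the `required` set and helper name are illustrative, not minikube's actual code):

```python
import json

# Trimmed sample of the `crictl images --output json` shape shown in the log above.
crictl_output = """
{
  "images": [
    {
      "id": "0f8457a4c2ecaceac160805013dc3c61c63a1ff3dee74a473a36249a748e0253",
      "repoTags": ["k8s.gcr.io/pause:3.4.1"],
      "repoDigests": [],
      "size": "689817",
      "uid": null,
      "username": "",
      "spec": null
    }
  ]
}
"""

def loaded_tags(raw: str) -> set:
    """Collect every repoTag reported by crictl."""
    return {tag for image in json.loads(raw)["images"] for tag in image["repoTags"]}

# Illustrative required set; minikube derives its own list per Kubernetes version.
required = {"k8s.gcr.io/pause:3.4.1"}
print(required <= loaded_tags(crictl_output))  # prints True: all required images are present
```

When the subset check fails, the preload tarball would need to be extracted instead of skipped.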
	I0810 22:32:57.614104    4347 ssh_runner.go:149] Run: sudo crictl images --output json
	I0810 22:32:57.650955    4347 command_runner.go:124] > {
	I0810 22:32:57.650980    4347 command_runner.go:124] >   "images": [
	I0810 22:32:57.650984    4347 command_runner.go:124] >     {
	I0810 22:32:57.650993    4347 command_runner.go:124] >       "id": "6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb",
	I0810 22:32:57.650998    4347 command_runner.go:124] >       "repoTags": [
	I0810 22:32:57.651004    4347 command_runner.go:124] >         "docker.io/kindest/kindnetd:v20210326-1e038dc5"
	I0810 22:32:57.651008    4347 command_runner.go:124] >       ],
	I0810 22:32:57.651014    4347 command_runner.go:124] >       "repoDigests": [
	I0810 22:32:57.651028    4347 command_runner.go:124] >         "docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1",
	I0810 22:32:57.651045    4347 command_runner.go:124] >         "docker.io/kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c"
	I0810 22:32:57.651053    4347 command_runner.go:124] >       ],
	I0810 22:32:57.651060    4347 command_runner.go:124] >       "size": "119984626",
	I0810 22:32:57.651068    4347 command_runner.go:124] >       "uid": null,
	I0810 22:32:57.651072    4347 command_runner.go:124] >       "username": "",
	I0810 22:32:57.651077    4347 command_runner.go:124] >       "spec": null
	I0810 22:32:57.651093    4347 command_runner.go:124] >     },
	I0810 22:32:57.651099    4347 command_runner.go:124] >     {
	I0810 22:32:57.651106    4347 command_runner.go:124] >       "id": "9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db",
	I0810 22:32:57.651113    4347 command_runner.go:124] >       "repoTags": [
	I0810 22:32:57.651122    4347 command_runner.go:124] >         "docker.io/kubernetesui/dashboard:v2.1.0"
	I0810 22:32:57.651128    4347 command_runner.go:124] >       ],
	I0810 22:32:57.651136    4347 command_runner.go:124] >       "repoDigests": [
	I0810 22:32:57.651154    4347 command_runner.go:124] >         "docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f",
	I0810 22:32:57.651170    4347 command_runner.go:124] >         "docker.io/kubernetesui/dashboard@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6"
	I0810 22:32:57.651177    4347 command_runner.go:124] >       ],
	I0810 22:32:57.651183    4347 command_runner.go:124] >       "size": "228528983",
	I0810 22:32:57.651188    4347 command_runner.go:124] >       "uid": null,
	I0810 22:32:57.651193    4347 command_runner.go:124] >       "username": "nonroot",
	I0810 22:32:57.651209    4347 command_runner.go:124] >       "spec": null
	I0810 22:32:57.651218    4347 command_runner.go:124] >     },
	I0810 22:32:57.651223    4347 command_runner.go:124] >     {
	I0810 22:32:57.651235    4347 command_runner.go:124] >       "id": "86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4",
	I0810 22:32:57.651242    4347 command_runner.go:124] >       "repoTags": [
	I0810 22:32:57.651251    4347 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper:v1.0.4"
	I0810 22:32:57.651257    4347 command_runner.go:124] >       ],
	I0810 22:32:57.651265    4347 command_runner.go:124] >       "repoDigests": [
	I0810 22:32:57.651278    4347 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper@sha256:555981a24f184420f3be0c79d4efb6c948a85cfce84034f85a563f4151a81cbf",
	I0810 22:32:57.651291    4347 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper@sha256:d78f995c07124874c2a2e9b404cffa6bc6233668d63d6c6210574971f3d5914b"
	I0810 22:32:57.651297    4347 command_runner.go:124] >       ],
	I0810 22:32:57.651304    4347 command_runner.go:124] >       "size": "36950651",
	I0810 22:32:57.651310    4347 command_runner.go:124] >       "uid": null,
	I0810 22:32:57.651318    4347 command_runner.go:124] >       "username": "",
	I0810 22:32:57.651336    4347 command_runner.go:124] >       "spec": null
	I0810 22:32:57.651344    4347 command_runner.go:124] >     },
	I0810 22:32:57.651350    4347 command_runner.go:124] >     {
	I0810 22:32:57.651362    4347 command_runner.go:124] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0810 22:32:57.651368    4347 command_runner.go:124] >       "repoTags": [
	I0810 22:32:57.651375    4347 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0810 22:32:57.651379    4347 command_runner.go:124] >       ],
	I0810 22:32:57.651387    4347 command_runner.go:124] >       "repoDigests": [
	I0810 22:32:57.651403    4347 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0810 22:32:57.651419    4347 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0810 22:32:57.651426    4347 command_runner.go:124] >       ],
	I0810 22:32:57.651433    4347 command_runner.go:124] >       "size": "31470524",
	I0810 22:32:57.651449    4347 command_runner.go:124] >       "uid": null,
	I0810 22:32:57.651458    4347 command_runner.go:124] >       "username": "",
	I0810 22:32:57.651463    4347 command_runner.go:124] >       "spec": null
	I0810 22:32:57.651467    4347 command_runner.go:124] >     },
	I0810 22:32:57.651472    4347 command_runner.go:124] >     {
	I0810 22:32:57.651482    4347 command_runner.go:124] >       "id": "296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899",
	I0810 22:32:57.651493    4347 command_runner.go:124] >       "repoTags": [
	I0810 22:32:57.651503    4347 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns:v1.8.0"
	I0810 22:32:57.651512    4347 command_runner.go:124] >       ],
	I0810 22:32:57.651518    4347 command_runner.go:124] >       "repoDigests": [
	I0810 22:32:57.651531    4347 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61",
	I0810 22:32:57.651544    4347 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e"
	I0810 22:32:57.651550    4347 command_runner.go:124] >       ],
	I0810 22:32:57.651556    4347 command_runner.go:124] >       "size": "42585056",
	I0810 22:32:57.651560    4347 command_runner.go:124] >       "uid": null,
	I0810 22:32:57.651566    4347 command_runner.go:124] >       "username": "",
	I0810 22:32:57.651573    4347 command_runner.go:124] >       "spec": null
	I0810 22:32:57.651578    4347 command_runner.go:124] >     },
	I0810 22:32:57.651585    4347 command_runner.go:124] >     {
	I0810 22:32:57.651595    4347 command_runner.go:124] >       "id": "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934",
	I0810 22:32:57.651608    4347 command_runner.go:124] >       "repoTags": [
	I0810 22:32:57.651615    4347 command_runner.go:124] >         "k8s.gcr.io/etcd:3.4.13-0"
	I0810 22:32:57.651621    4347 command_runner.go:124] >       ],
	I0810 22:32:57.651628    4347 command_runner.go:124] >       "repoDigests": [
	I0810 22:32:57.651639    4347 command_runner.go:124] >         "k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2",
	I0810 22:32:57.651650    4347 command_runner.go:124] >         "k8s.gcr.io/etcd@sha256:bd4d2c9a19be8a492bc79df53eee199fd04b415e9993eb69f7718052602a147a"
	I0810 22:32:57.651655    4347 command_runner.go:124] >       ],
	I0810 22:32:57.651663    4347 command_runner.go:124] >       "size": "254662613",
	I0810 22:32:57.651669    4347 command_runner.go:124] >       "uid": null,
	I0810 22:32:57.651676    4347 command_runner.go:124] >       "username": "",
	I0810 22:32:57.651682    4347 command_runner.go:124] >       "spec": null
	I0810 22:32:57.651687    4347 command_runner.go:124] >     },
	I0810 22:32:57.651694    4347 command_runner.go:124] >     {
	I0810 22:32:57.651704    4347 command_runner.go:124] >       "id": "3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80",
	I0810 22:32:57.651714    4347 command_runner.go:124] >       "repoTags": [
	I0810 22:32:57.651722    4347 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver:v1.21.3"
	I0810 22:32:57.651731    4347 command_runner.go:124] >       ],
	I0810 22:32:57.651736    4347 command_runner.go:124] >       "repoDigests": [
	I0810 22:32:57.651749    4347 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2",
	I0810 22:32:57.651767    4347 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver@sha256:910cfdf034262c7b68ecb17c0885f39bdaaad07d87c9a5b6320819d8500b7ee5"
	I0810 22:32:57.651776    4347 command_runner.go:124] >       ],
	I0810 22:32:57.651783    4347 command_runner.go:124] >       "size": "126878961",
	I0810 22:32:57.651790    4347 command_runner.go:124] >       "uid": {
	I0810 22:32:57.651796    4347 command_runner.go:124] >         "value": "0"
	I0810 22:32:57.651803    4347 command_runner.go:124] >       },
	I0810 22:32:57.651809    4347 command_runner.go:124] >       "username": "",
	I0810 22:32:57.651816    4347 command_runner.go:124] >       "spec": null
	I0810 22:32:57.651821    4347 command_runner.go:124] >     },
	I0810 22:32:57.651827    4347 command_runner.go:124] >     {
	I0810 22:32:57.651834    4347 command_runner.go:124] >       "id": "bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9",
	I0810 22:32:57.651842    4347 command_runner.go:124] >       "repoTags": [
	I0810 22:32:57.651850    4347 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager:v1.21.3"
	I0810 22:32:57.651858    4347 command_runner.go:124] >       ],
	I0810 22:32:57.651864    4347 command_runner.go:124] >       "repoDigests": [
	I0810 22:32:57.651877    4347 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b",
	I0810 22:32:57.651893    4347 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager@sha256:7fb1f6614597c255b475ed8abf553e0d4e8ea211b06a90bed53eaddcfb9c354f"
	I0810 22:32:57.651901    4347 command_runner.go:124] >       ],
	I0810 22:32:57.651911    4347 command_runner.go:124] >       "size": "121087578",
	I0810 22:32:57.651919    4347 command_runner.go:124] >       "uid": {
	I0810 22:32:57.651927    4347 command_runner.go:124] >         "value": "0"
	I0810 22:32:57.651934    4347 command_runner.go:124] >       },
	I0810 22:32:57.651954    4347 command_runner.go:124] >       "username": "",
	I0810 22:32:57.651964    4347 command_runner.go:124] >       "spec": null
	I0810 22:32:57.651970    4347 command_runner.go:124] >     },
	I0810 22:32:57.651975    4347 command_runner.go:124] >     {
	I0810 22:32:57.651987    4347 command_runner.go:124] >       "id": "adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92",
	I0810 22:32:57.651994    4347 command_runner.go:124] >       "repoTags": [
	I0810 22:32:57.652002    4347 command_runner.go:124] >         "k8s.gcr.io/kube-proxy:v1.21.3"
	I0810 22:32:57.652007    4347 command_runner.go:124] >       ],
	I0810 22:32:57.652012    4347 command_runner.go:124] >       "repoDigests": [
	I0810 22:32:57.652024    4347 command_runner.go:124] >         "k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b",
	I0810 22:32:57.652038    4347 command_runner.go:124] >         "k8s.gcr.io/kube-proxy@sha256:c7778d7b97b2a822c3fe3e921d104ac42afbd38268de8df03557465780886627"
	I0810 22:32:57.652045    4347 command_runner.go:124] >       ],
	I0810 22:32:57.652052    4347 command_runner.go:124] >       "size": "105129702",
	I0810 22:32:57.652059    4347 command_runner.go:124] >       "uid": null,
	I0810 22:32:57.652066    4347 command_runner.go:124] >       "username": "",
	I0810 22:32:57.652073    4347 command_runner.go:124] >       "spec": null
	I0810 22:32:57.652078    4347 command_runner.go:124] >     },
	I0810 22:32:57.652087    4347 command_runner.go:124] >     {
	I0810 22:32:57.652099    4347 command_runner.go:124] >       "id": "6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a",
	I0810 22:32:57.652104    4347 command_runner.go:124] >       "repoTags": [
	I0810 22:32:57.652129    4347 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler:v1.21.3"
	I0810 22:32:57.652136    4347 command_runner.go:124] >       ],
	I0810 22:32:57.652150    4347 command_runner.go:124] >       "repoDigests": [
	I0810 22:32:57.652166    4347 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4",
	I0810 22:32:57.652179    4347 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler@sha256:b61779ea1bd936c137b25b3a7baa5551fbbd84fed8568d15c7c85ab1139521c0"
	I0810 22:32:57.652186    4347 command_runner.go:124] >       ],
	I0810 22:32:57.652191    4347 command_runner.go:124] >       "size": "51893338",
	I0810 22:32:57.652195    4347 command_runner.go:124] >       "uid": {
	I0810 22:32:57.652201    4347 command_runner.go:124] >         "value": "0"
	I0810 22:32:57.652208    4347 command_runner.go:124] >       },
	I0810 22:32:57.652214    4347 command_runner.go:124] >       "username": "",
	I0810 22:32:57.652222    4347 command_runner.go:124] >       "spec": null
	I0810 22:32:57.652228    4347 command_runner.go:124] >     },
	I0810 22:32:57.652234    4347 command_runner.go:124] >     {
	I0810 22:32:57.652244    4347 command_runner.go:124] >       "id": "0f8457a4c2ecaceac160805013dc3c61c63a1ff3dee74a473a36249a748e0253",
	I0810 22:32:57.652251    4347 command_runner.go:124] >       "repoTags": [
	I0810 22:32:57.652258    4347 command_runner.go:124] >         "k8s.gcr.io/pause:3.4.1"
	I0810 22:32:57.652268    4347 command_runner.go:124] >       ],
	I0810 22:32:57.652275    4347 command_runner.go:124] >       "repoDigests": [
	I0810 22:32:57.652283    4347 command_runner.go:124] >         "k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810",
	I0810 22:32:57.652296    4347 command_runner.go:124] >         "k8s.gcr.io/pause@sha256:914e745e524aa94315a25b49a7fafc0aa395e332126930593225d7a513f5a6b2"
	I0810 22:32:57.652302    4347 command_runner.go:124] >       ],
	I0810 22:32:57.652309    4347 command_runner.go:124] >       "size": "689817",
	I0810 22:32:57.652316    4347 command_runner.go:124] >       "uid": null,
	I0810 22:32:57.652323    4347 command_runner.go:124] >       "username": "",
	I0810 22:32:57.652329    4347 command_runner.go:124] >       "spec": null
	I0810 22:32:57.652335    4347 command_runner.go:124] >     }
	I0810 22:32:57.652340    4347 command_runner.go:124] >   ]
	I0810 22:32:57.652346    4347 command_runner.go:124] > }
	I0810 22:32:57.653173    4347 crio.go:424] all images are preloaded for cri-o runtime.
	I0810 22:32:57.653195    4347 cache_images.go:74] Images are preloaded, skipping loading
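Each entry in the listing above reports `size` as a byte count encoded as a string. The total on-disk footprint of a few images can be tallied directly from those fields (the three sample values below are taken from the kube-proxy, kube-scheduler, and pause entries in the log):

```python
# "size" fields in the crictl JSON listing are byte counts encoded as strings.
sizes = ["105129702", "51893338", "689817"]  # kube-proxy, kube-scheduler, pause

total_bytes = sum(int(s) for s in sizes)
print(f"{total_bytes / 1024 / 1024:.1f} MiB")  # prints 150.4 MiB
```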
	I0810 22:32:57.653286    4347 ssh_runner.go:149] Run: crio config
	I0810 22:32:57.851973    4347 command_runner.go:124] ! time="2021-08-10T22:32:57Z" level=info msg="Starting CRI-O, version: 1.20.2, git: d5a999ad0a35d895ded554e1e18c142075501a98(clean)"
	I0810 22:32:57.855728    4347 command_runner.go:124] ! time="2021-08-10T22:32:57Z" level=warning msg="The 'registries' option in crio.conf(5) (referenced in \"/etc/crio/crio.conf\") has been deprecated and will be removed with CRI-O 1.21."
	I0810 22:32:57.855771    4347 command_runner.go:124] ! time="2021-08-10T22:32:57Z" level=warning msg="Please refer to containers-registries.conf(5) for configuring unqualified-search registries."
	I0810 22:32:57.858116    4347 command_runner.go:124] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0810 22:32:57.862859    4347 command_runner.go:124] > # The CRI-O configuration file specifies all of the available configuration
	I0810 22:32:57.862882    4347 command_runner.go:124] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0810 22:32:57.862893    4347 command_runner.go:124] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0810 22:32:57.862898    4347 command_runner.go:124] > #
	I0810 22:32:57.862906    4347 command_runner.go:124] > # Please refer to crio.conf(5) for details of all configuration options.
	I0810 22:32:57.862914    4347 command_runner.go:124] > # CRI-O supports partial configuration reload during runtime, which can be
	I0810 22:32:57.862920    4347 command_runner.go:124] > # done by sending SIGHUP to the running process. Currently supported options
	I0810 22:32:57.862929    4347 command_runner.go:124] > # are explicitly mentioned with: 'This option supports live configuration
	I0810 22:32:57.862934    4347 command_runner.go:124] > # reload'.
	I0810 22:32:57.862941    4347 command_runner.go:124] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0810 22:32:57.862990    4347 command_runner.go:124] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0810 22:32:57.863005    4347 command_runner.go:124] > # you want to change the system's defaults. If you want to modify storage just
	I0810 22:32:57.863019    4347 command_runner.go:124] > # for CRI-O, you can change the storage configuration options here.
	I0810 22:32:57.863023    4347 command_runner.go:124] > [crio]
	I0810 22:32:57.863037    4347 command_runner.go:124] > # Path to the "root directory". CRI-O stores all of its data, including
	I0810 22:32:57.863073    4347 command_runner.go:124] > # containers images, in this directory.
	I0810 22:32:57.863089    4347 command_runner.go:124] > #root = "/var/lib/containers/storage"
	I0810 22:32:57.863112    4347 command_runner.go:124] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0810 22:32:57.863126    4347 command_runner.go:124] > #runroot = "/var/run/containers/storage"
	I0810 22:32:57.863153    4347 command_runner.go:124] > # Storage driver used to manage the storage of images and containers. Please
	I0810 22:32:57.863166    4347 command_runner.go:124] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0810 22:32:57.863176    4347 command_runner.go:124] > #storage_driver = "overlay"
	I0810 22:32:57.863187    4347 command_runner.go:124] > # List to pass options to the storage driver. Please refer to
	I0810 22:32:57.863199    4347 command_runner.go:124] > # containers-storage.conf(5) to see all available storage options.
	I0810 22:32:57.863206    4347 command_runner.go:124] > #storage_option = [
	I0810 22:32:57.863210    4347 command_runner.go:124] > #]
	I0810 22:32:57.863221    4347 command_runner.go:124] > # The default log directory where all logs will go unless directly specified by
	I0810 22:32:57.863234    4347 command_runner.go:124] > # the kubelet. The log directory specified must be an absolute directory.
	I0810 22:32:57.863245    4347 command_runner.go:124] > log_dir = "/var/log/crio/pods"
	I0810 22:32:57.863255    4347 command_runner.go:124] > # Location for CRI-O to lay down the temporary version file.
	I0810 22:32:57.863268    4347 command_runner.go:124] > # It is used to check if crio wipe should wipe containers, which should
	I0810 22:32:57.863278    4347 command_runner.go:124] > # always happen on a node reboot
	I0810 22:32:57.863287    4347 command_runner.go:124] > version_file = "/var/run/crio/version"
	I0810 22:32:57.863297    4347 command_runner.go:124] > # Location for CRI-O to lay down the persistent version file.
	I0810 22:32:57.863305    4347 command_runner.go:124] > # It is used to check if crio wipe should wipe images, which should
	I0810 22:32:57.863314    4347 command_runner.go:124] > # only happen when CRI-O has been upgraded
	I0810 22:32:57.863329    4347 command_runner.go:124] > version_file_persist = "/var/lib/crio/version"
	I0810 22:32:57.863343    4347 command_runner.go:124] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0810 22:32:57.863351    4347 command_runner.go:124] > [crio.api]
	I0810 22:32:57.863360    4347 command_runner.go:124] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0810 22:32:57.863370    4347 command_runner.go:124] > listen = "/var/run/crio/crio.sock"
	I0810 22:32:57.863378    4347 command_runner.go:124] > # IP address on which the stream server will listen.
	I0810 22:32:57.863396    4347 command_runner.go:124] > stream_address = "127.0.0.1"
	I0810 22:32:57.863410    4347 command_runner.go:124] > # The port on which the stream server will listen. If the port is set to "0", then
	I0810 22:32:57.863422    4347 command_runner.go:124] > # CRI-O will allocate a random free port number.
	I0810 22:32:57.863432    4347 command_runner.go:124] > stream_port = "0"
	I0810 22:32:57.863442    4347 command_runner.go:124] > # Enable encrypted TLS transport of the stream server.
	I0810 22:32:57.863450    4347 command_runner.go:124] > stream_enable_tls = false
	I0810 22:32:57.863459    4347 command_runner.go:124] > # Length of time until open streams terminate due to lack of activity
	I0810 22:32:57.863468    4347 command_runner.go:124] > stream_idle_timeout = ""
	I0810 22:32:57.863478    4347 command_runner.go:124] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0810 22:32:57.863489    4347 command_runner.go:124] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0810 22:32:57.863498    4347 command_runner.go:124] > # minutes.
	I0810 22:32:57.863504    4347 command_runner.go:124] > stream_tls_cert = ""
	I0810 22:32:57.863517    4347 command_runner.go:124] > # Path to the key file used to serve the encrypted stream. This file can
	I0810 22:32:57.863530    4347 command_runner.go:124] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0810 22:32:57.863539    4347 command_runner.go:124] > stream_tls_key = ""
	I0810 22:32:57.863549    4347 command_runner.go:124] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0810 22:32:57.863562    4347 command_runner.go:124] > # communication with the encrypted stream. This file can change and CRI-O will
	I0810 22:32:57.863571    4347 command_runner.go:124] > # automatically pick up the changes within 5 minutes.
	I0810 22:32:57.863576    4347 command_runner.go:124] > stream_tls_ca = ""
	I0810 22:32:57.863588    4347 command_runner.go:124] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0810 22:32:57.863599    4347 command_runner.go:124] > grpc_max_send_msg_size = 16777216
	I0810 22:32:57.863613    4347 command_runner.go:124] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0810 22:32:57.863624    4347 command_runner.go:124] > grpc_max_recv_msg_size = 16777216
	I0810 22:32:57.863635    4347 command_runner.go:124] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0810 22:32:57.863644    4347 command_runner.go:124] > # and options for how to set up and manage the OCI runtime.
	I0810 22:32:57.863653    4347 command_runner.go:124] > [crio.runtime]
	I0810 22:32:57.863661    4347 command_runner.go:124] > # A list of ulimits to be set in containers by default, specified as
	I0810 22:32:57.863671    4347 command_runner.go:124] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0810 22:32:57.863680    4347 command_runner.go:124] > # "nofile=1024:2048"
	I0810 22:32:57.863690    4347 command_runner.go:124] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0810 22:32:57.863697    4347 command_runner.go:124] > #default_ulimits = [
	I0810 22:32:57.863702    4347 command_runner.go:124] > #]
	I0810 22:32:57.863713    4347 command_runner.go:124] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0810 22:32:57.863723    4347 command_runner.go:124] > no_pivot = false
	I0810 22:32:57.863732    4347 command_runner.go:124] > # decryption_keys_path is the path where the keys required for
	I0810 22:32:57.863758    4347 command_runner.go:124] > # image decryption are stored. This option supports live configuration reload.
	I0810 22:32:57.863769    4347 command_runner.go:124] > decryption_keys_path = "/etc/crio/keys/"
	I0810 22:32:57.863780    4347 command_runner.go:124] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0810 22:32:57.863796    4347 command_runner.go:124] > # Will be searched for using $PATH if empty.
	I0810 22:32:57.863805    4347 command_runner.go:124] > conmon = "/usr/libexec/crio/conmon"
	I0810 22:32:57.863812    4347 command_runner.go:124] > # Cgroup setting for conmon
	I0810 22:32:57.863819    4347 command_runner.go:124] > conmon_cgroup = "system.slice"
	I0810 22:32:57.863832    4347 command_runner.go:124] > # Environment variable list for the conmon process, used for passing necessary
	I0810 22:32:57.863841    4347 command_runner.go:124] > # environment variables to conmon or the runtime.
	I0810 22:32:57.863845    4347 command_runner.go:124] > conmon_env = [
	I0810 22:32:57.863854    4347 command_runner.go:124] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0810 22:32:57.863862    4347 command_runner.go:124] > ]
	I0810 22:32:57.863871    4347 command_runner.go:124] > # Additional environment variables to set for all the
	I0810 22:32:57.863882    4347 command_runner.go:124] > # containers. These are overridden if set in the
	I0810 22:32:57.863893    4347 command_runner.go:124] > # container image spec or in the container runtime configuration.
	I0810 22:32:57.863899    4347 command_runner.go:124] > default_env = [
	I0810 22:32:57.863906    4347 command_runner.go:124] > ]
	I0810 22:32:57.863916    4347 command_runner.go:124] > # If true, SELinux will be used for pod separation on the host.
	I0810 22:32:57.863924    4347 command_runner.go:124] > selinux = false
	I0810 22:32:57.863933    4347 command_runner.go:124] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0810 22:32:57.863946    4347 command_runner.go:124] > # for the runtime. If not specified, then the internal default seccomp profile
	I0810 22:32:57.863959    4347 command_runner.go:124] > # will be used. This option supports live configuration reload.
	I0810 22:32:57.863970    4347 command_runner.go:124] > seccomp_profile = ""
	I0810 22:32:57.863980    4347 command_runner.go:124] > # Changes the meaning of an empty seccomp profile. By default
	I0810 22:32:57.863991    4347 command_runner.go:124] > # (and according to CRI spec), an empty profile means unconfined.
	I0810 22:32:57.864004    4347 command_runner.go:124] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0810 22:32:57.864013    4347 command_runner.go:124] > # which might increase security.
	I0810 22:32:57.864021    4347 command_runner.go:124] > seccomp_use_default_when_empty = false
	I0810 22:32:57.864028    4347 command_runner.go:124] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0810 22:32:57.864041    4347 command_runner.go:124] > # profile name is "crio-default". This profile only takes effect if the user
	I0810 22:32:57.864052    4347 command_runner.go:124] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0810 22:32:57.864065    4347 command_runner.go:124] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0810 22:32:57.864073    4347 command_runner.go:124] > # This option supports live configuration reload.
	I0810 22:32:57.864083    4347 command_runner.go:124] > apparmor_profile = "crio-default"
	I0810 22:32:57.864094    4347 command_runner.go:124] > # Used to change irqbalance service config file path which is used for configuring
	I0810 22:32:57.864103    4347 command_runner.go:124] > # irqbalance daemon.
	I0810 22:32:57.864111    4347 command_runner.go:124] > irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0810 22:32:57.864138    4347 command_runner.go:124] > # Cgroup management implementation used for the runtime.
	I0810 22:32:57.864151    4347 command_runner.go:124] > cgroup_manager = "systemd"
	I0810 22:32:57.864163    4347 command_runner.go:124] > # Specify whether the image pull must be performed in a separate cgroup.
	I0810 22:32:57.864173    4347 command_runner.go:124] > separate_pull_cgroup = ""
	I0810 22:32:57.864184    4347 command_runner.go:124] > # List of default capabilities for containers. If it is empty or commented out,
	I0810 22:32:57.864200    4347 command_runner.go:124] > # only the capabilities defined in the containers json file by the user/kube
	I0810 22:32:57.864207    4347 command_runner.go:124] > # will be added.
	I0810 22:32:57.864212    4347 command_runner.go:124] > default_capabilities = [
	I0810 22:32:57.864219    4347 command_runner.go:124] > 	"CHOWN",
	I0810 22:32:57.864225    4347 command_runner.go:124] > 	"DAC_OVERRIDE",
	I0810 22:32:57.864232    4347 command_runner.go:124] > 	"FSETID",
	I0810 22:32:57.864238    4347 command_runner.go:124] > 	"FOWNER",
	I0810 22:32:57.864245    4347 command_runner.go:124] > 	"SETGID",
	I0810 22:32:57.864250    4347 command_runner.go:124] > 	"SETUID",
	I0810 22:32:57.864257    4347 command_runner.go:124] > 	"SETPCAP",
	I0810 22:32:57.864264    4347 command_runner.go:124] > 	"NET_BIND_SERVICE",
	I0810 22:32:57.864272    4347 command_runner.go:124] > 	"KILL",
	I0810 22:32:57.864278    4347 command_runner.go:124] > ]
	I0810 22:32:57.864291    4347 command_runner.go:124] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0810 22:32:57.864303    4347 command_runner.go:124] > # defined in the container json file by the user/kube will be added.
	I0810 22:32:57.864313    4347 command_runner.go:124] > default_sysctls = [
	I0810 22:32:57.864319    4347 command_runner.go:124] > ]
	I0810 22:32:57.864327    4347 command_runner.go:124] > # List of additional devices, specified as
	I0810 22:32:57.864342    4347 command_runner.go:124] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0810 22:32:57.864353    4347 command_runner.go:124] > # If it is empty or commented out, only the devices
	I0810 22:32:57.864365    4347 command_runner.go:124] > # defined in the container json file by the user/kube will be added.
	I0810 22:32:57.864374    4347 command_runner.go:124] > additional_devices = [
	I0810 22:32:57.864379    4347 command_runner.go:124] > ]
	I0810 22:32:57.864387    4347 command_runner.go:124] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0810 22:32:57.864398    4347 command_runner.go:124] > # directories does not exist, then CRI-O will automatically skip them.
	I0810 22:32:57.864404    4347 command_runner.go:124] > hooks_dir = [
	I0810 22:32:57.864412    4347 command_runner.go:124] > 	"/usr/share/containers/oci/hooks.d",
	I0810 22:32:57.864419    4347 command_runner.go:124] > ]
	I0810 22:32:57.864430    4347 command_runner.go:124] > # Path to the file specifying the defaults mounts for each container. The
	I0810 22:32:57.864443    4347 command_runner.go:124] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0810 22:32:57.864454    4347 command_runner.go:124] > # its default mounts from the following two files:
	I0810 22:32:57.864462    4347 command_runner.go:124] > #
	I0810 22:32:57.864472    4347 command_runner.go:124] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0810 22:32:57.864482    4347 command_runner.go:124] > #      override file, where users can either add in their own default mounts, or
	I0810 22:32:57.864491    4347 command_runner.go:124] > #      override the default mounts shipped with the package.
	I0810 22:32:57.864499    4347 command_runner.go:124] > #
	I0810 22:32:57.864509    4347 command_runner.go:124] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0810 22:32:57.864523    4347 command_runner.go:124] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0810 22:32:57.864537    4347 command_runner.go:124] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0810 22:32:57.864548    4347 command_runner.go:124] > #      only add mounts it finds in this file.
	I0810 22:32:57.864555    4347 command_runner.go:124] > #
	I0810 22:32:57.864562    4347 command_runner.go:124] > #default_mounts_file = ""
	I0810 22:32:57.864570    4347 command_runner.go:124] > # Maximum number of processes allowed in a container.
	I0810 22:32:57.864579    4347 command_runner.go:124] > pids_limit = 1024
	I0810 22:32:57.864593    4347 command_runner.go:124] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0810 22:32:57.864607    4347 command_runner.go:124] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0810 22:32:57.864621    4347 command_runner.go:124] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0810 22:32:57.864634    4347 command_runner.go:124] > # limit is never exceeded.
	I0810 22:32:57.864643    4347 command_runner.go:124] > log_size_max = -1
	I0810 22:32:57.864673    4347 command_runner.go:124] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0810 22:32:57.864705    4347 command_runner.go:124] > log_to_journald = false
	I0810 22:32:57.864715    4347 command_runner.go:124] > # Path to directory in which container exit files are written to by conmon.
	I0810 22:32:57.864723    4347 command_runner.go:124] > container_exits_dir = "/var/run/crio/exits"
	I0810 22:32:57.864732    4347 command_runner.go:124] > # Path to directory for container attach sockets.
	I0810 22:32:57.864740    4347 command_runner.go:124] > container_attach_socket_dir = "/var/run/crio"
	I0810 22:32:57.864749    4347 command_runner.go:124] > # The prefix to use for the source of the bind mounts.
	I0810 22:32:57.864757    4347 command_runner.go:124] > bind_mount_prefix = ""
	I0810 22:32:57.864770    4347 command_runner.go:124] > # If set to true, all containers will run in read-only mode.
	I0810 22:32:57.864779    4347 command_runner.go:124] > read_only = false
	I0810 22:32:57.864789    4347 command_runner.go:124] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0810 22:32:57.864803    4347 command_runner.go:124] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0810 22:32:57.864814    4347 command_runner.go:124] > # live configuration reload.
	I0810 22:32:57.864822    4347 command_runner.go:124] > log_level = "info"
	I0810 22:32:57.864832    4347 command_runner.go:124] > # Filter the log messages by the provided regular expression.
	I0810 22:32:57.864841    4347 command_runner.go:124] > # This option supports live configuration reload.
	I0810 22:32:57.864846    4347 command_runner.go:124] > log_filter = ""
	I0810 22:32:57.864859    4347 command_runner.go:124] > # The UID mappings for the user namespace of each container. A range is
	I0810 22:32:57.864873    4347 command_runner.go:124] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0810 22:32:57.864883    4347 command_runner.go:124] > # separated by comma.
	I0810 22:32:57.864892    4347 command_runner.go:124] > uid_mappings = ""
	I0810 22:32:57.864902    4347 command_runner.go:124] > # The GID mappings for the user namespace of each container. A range is
	I0810 22:32:57.864915    4347 command_runner.go:124] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0810 22:32:57.864924    4347 command_runner.go:124] > # separated by comma.
	I0810 22:32:57.864930    4347 command_runner.go:124] > gid_mappings = ""
	I0810 22:32:57.864939    4347 command_runner.go:124] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0810 22:32:57.864953    4347 command_runner.go:124] > # regarding the proper termination of the container. The lowest possible
	I0810 22:32:57.864967    4347 command_runner.go:124] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0810 22:32:57.864976    4347 command_runner.go:124] > ctr_stop_timeout = 30
	I0810 22:32:57.864986    4347 command_runner.go:124] > # manage_ns_lifecycle determines whether we pin and remove namespaces
	I0810 22:32:57.864996    4347 command_runner.go:124] > # and manage their lifecycle.
	I0810 22:32:57.865006    4347 command_runner.go:124] > # This option is being deprecated, and will be unconditionally true in the future.
	I0810 22:32:57.865014    4347 command_runner.go:124] > manage_ns_lifecycle = true
	I0810 22:32:57.865021    4347 command_runner.go:124] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0810 22:32:57.865036    4347 command_runner.go:124] > # when a pod does not have a private PID namespace, and does not use
	I0810 22:32:57.865047    4347 command_runner.go:124] > # a kernel separating runtime (like kata).
	I0810 22:32:57.865055    4347 command_runner.go:124] > # It requires manage_ns_lifecycle to be true.
	I0810 22:32:57.865065    4347 command_runner.go:124] > drop_infra_ctr = false
	I0810 22:32:57.865075    4347 command_runner.go:124] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0810 22:32:57.865087    4347 command_runner.go:124] > # You can use linux CPU list format to specify desired CPUs.
	I0810 22:32:57.865098    4347 command_runner.go:124] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0810 22:32:57.865104    4347 command_runner.go:124] > # infra_ctr_cpuset = ""
	I0810 22:32:57.865110    4347 command_runner.go:124] > # The directory where the state of the managed namespaces gets tracked.
	I0810 22:32:57.865118    4347 command_runner.go:124] > # Only used when manage_ns_lifecycle is true.
	I0810 22:32:57.865124    4347 command_runner.go:124] > namespaces_dir = "/var/run"
	I0810 22:32:57.865136    4347 command_runner.go:124] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0810 22:32:57.865147    4347 command_runner.go:124] > pinns_path = "/usr/bin/pinns"
	I0810 22:32:57.865157    4347 command_runner.go:124] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0810 22:32:57.865167    4347 command_runner.go:124] > # The name is matched against the runtimes map below. If this value is changed,
	I0810 22:32:57.865177    4347 command_runner.go:124] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0810 22:32:57.865185    4347 command_runner.go:124] > default_runtime = "runc"
	I0810 22:32:57.865194    4347 command_runner.go:124] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0810 22:32:57.865206    4347 command_runner.go:124] > # The runtime to use is picked based on the runtime_handler provided by the CRI.
	I0810 22:32:57.865220    4347 command_runner.go:124] > # If no runtime_handler is provided, the runtime will be picked based on the level
	I0810 22:32:57.865233    4347 command_runner.go:124] > # of trust of the workload. Each entry in the table should follow the format:
	I0810 22:32:57.865242    4347 command_runner.go:124] > #
	I0810 22:32:57.865251    4347 command_runner.go:124] > #[crio.runtime.runtimes.runtime-handler]
	I0810 22:32:57.865261    4347 command_runner.go:124] > #  runtime_path = "/path/to/the/executable"
	I0810 22:32:57.865271    4347 command_runner.go:124] > #  runtime_type = "oci"
	I0810 22:32:57.865279    4347 command_runner.go:124] > #  runtime_root = "/path/to/the/root"
	I0810 22:32:57.865285    4347 command_runner.go:124] > #  privileged_without_host_devices = false
	I0810 22:32:57.865292    4347 command_runner.go:124] > #  allowed_annotations = []
	I0810 22:32:57.865301    4347 command_runner.go:124] > # Where:
	I0810 22:32:57.865310    4347 command_runner.go:124] > # - runtime-handler: name used to identify the runtime
	I0810 22:32:57.865325    4347 command_runner.go:124] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0810 22:32:57.865340    4347 command_runner.go:124] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0810 22:32:57.865354    4347 command_runner.go:124] > #   the runtime executable name, and the runtime executable should be placed
	I0810 22:32:57.865363    4347 command_runner.go:124] > #   in $PATH.
	I0810 22:32:57.865372    4347 command_runner.go:124] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0810 22:32:57.865383    4347 command_runner.go:124] > #   omitted, an "oci" runtime is assumed.
	I0810 22:32:57.865396    4347 command_runner.go:124] > # - runtime_root (optional, string): root directory for storage of containers
	I0810 22:32:57.865406    4347 command_runner.go:124] > #   state.
	I0810 22:32:57.865419    4347 command_runner.go:124] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0810 22:32:57.865431    4347 command_runner.go:124] > #   host devices from being passed to privileged containers.
	I0810 22:32:57.865445    4347 command_runner.go:124] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0810 22:32:57.865460    4347 command_runner.go:124] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0810 22:32:57.865471    4347 command_runner.go:124] > #   The currently recognized values are:
	I0810 22:32:57.865485    4347 command_runner.go:124] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0810 22:32:57.865496    4347 command_runner.go:124] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0810 22:32:57.865509    4347 command_runner.go:124] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0810 22:32:57.865519    4347 command_runner.go:124] > [crio.runtime.runtimes.runc]
	I0810 22:32:57.865527    4347 command_runner.go:124] > runtime_path = "/usr/bin/runc"
	I0810 22:32:57.865535    4347 command_runner.go:124] > runtime_type = "oci"
	I0810 22:32:57.865543    4347 command_runner.go:124] > runtime_root = "/run/runc"
	I0810 22:32:57.865553    4347 command_runner.go:124] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0810 22:32:57.865559    4347 command_runner.go:124] > # running containers
	I0810 22:32:57.865567    4347 command_runner.go:124] > #[crio.runtime.runtimes.crun]
	I0810 22:32:57.865580    4347 command_runner.go:124] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0810 22:32:57.865591    4347 command_runner.go:124] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0810 22:32:57.865605    4347 command_runner.go:124] > # surface and mitigating the consequences of containers breakout.
	I0810 22:32:57.865618    4347 command_runner.go:124] > # Kata Containers with the default configured VMM
	I0810 22:32:57.865629    4347 command_runner.go:124] > #[crio.runtime.runtimes.kata-runtime]
	I0810 22:32:57.865637    4347 command_runner.go:124] > # Kata Containers with the QEMU VMM
	I0810 22:32:57.865645    4347 command_runner.go:124] > #[crio.runtime.runtimes.kata-qemu]
	I0810 22:32:57.865653    4347 command_runner.go:124] > # Kata Containers with the Firecracker VMM
	I0810 22:32:57.865663    4347 command_runner.go:124] > #[crio.runtime.runtimes.kata-fc]
	I0810 22:32:57.865675    4347 command_runner.go:124] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0810 22:32:57.865683    4347 command_runner.go:124] > #
	I0810 22:32:57.865693    4347 command_runner.go:124] > # CRI-O reads its configured registries defaults from the system wide
	I0810 22:32:57.865706    4347 command_runner.go:124] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0810 22:32:57.865719    4347 command_runner.go:124] > # you want to modify just CRI-O, you can change the registries configuration in
	I0810 22:32:57.865731    4347 command_runner.go:124] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0810 22:32:57.865738    4347 command_runner.go:124] > # use the system's defaults from /etc/containers/registries.conf.
	I0810 22:32:57.865746    4347 command_runner.go:124] > [crio.image]
	I0810 22:32:57.865768    4347 command_runner.go:124] > # Default transport for pulling images from a remote container storage.
	I0810 22:32:57.865779    4347 command_runner.go:124] > default_transport = "docker://"
	I0810 22:32:57.865791    4347 command_runner.go:124] > # The path to a file containing credentials necessary for pulling images from
	I0810 22:32:57.865804    4347 command_runner.go:124] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0810 22:32:57.865816    4347 command_runner.go:124] > global_auth_file = ""
	I0810 22:32:57.865828    4347 command_runner.go:124] > # The image used to instantiate infra containers.
	I0810 22:32:57.865836    4347 command_runner.go:124] > # This option supports live configuration reload.
	I0810 22:32:57.865847    4347 command_runner.go:124] > pause_image = "k8s.gcr.io/pause:3.4.1"
	I0810 22:32:57.865858    4347 command_runner.go:124] > # The path to a file containing credentials specific for pulling the pause_image from
	I0810 22:32:57.865871    4347 command_runner.go:124] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0810 22:32:57.865882    4347 command_runner.go:124] > # This option supports live configuration reload.
	I0810 22:32:57.865893    4347 command_runner.go:124] > pause_image_auth_file = ""
	I0810 22:32:57.865904    4347 command_runner.go:124] > # The command to run to have a container stay in the paused state.
	I0810 22:32:57.865913    4347 command_runner.go:124] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0810 22:32:57.865925    4347 command_runner.go:124] > # specified in the pause image. When commented out, it will fallback to the
	I0810 22:32:57.865935    4347 command_runner.go:124] > # default: "/pause". This option supports live configuration reload.
	I0810 22:32:57.865945    4347 command_runner.go:124] > pause_command = "/pause"
	I0810 22:32:57.865957    4347 command_runner.go:124] > # Path to the file which decides what sort of policy we use when deciding
	I0810 22:32:57.865970    4347 command_runner.go:124] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0810 22:32:57.865983    4347 command_runner.go:124] > # this option be used, as the default behavior of using the system-wide default
	I0810 22:32:57.865995    4347 command_runner.go:124] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0810 22:32:57.866004    4347 command_runner.go:124] > # refer to containers-policy.json(5) for more details.
	I0810 22:32:57.866009    4347 command_runner.go:124] > signature_policy = ""
	I0810 22:32:57.866021    4347 command_runner.go:124] > # List of registries to skip TLS verification for pulling images. Please
	I0810 22:32:57.866032    4347 command_runner.go:124] > # consider configuring the registries via /etc/containers/registries.conf before
	I0810 22:32:57.866042    4347 command_runner.go:124] > # changing them here.
	I0810 22:32:57.866049    4347 command_runner.go:124] > #insecure_registries = "[]"
	I0810 22:32:57.866061    4347 command_runner.go:124] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0810 22:32:57.866070    4347 command_runner.go:124] > # ignore; the latter will ignore volumes entirely.
	I0810 22:32:57.866078    4347 command_runner.go:124] > image_volumes = "mkdir"
	I0810 22:32:57.866088    4347 command_runner.go:124] > # List of registries to be used when pulling an unqualified image (e.g.,
	I0810 22:32:57.866097    4347 command_runner.go:124] > # "alpine:latest"). By default, registries is set to "docker.io" for
	I0810 22:32:57.866107    4347 command_runner.go:124] > # compatibility reasons. Depending on your workload and usecase you may add more
	I0810 22:32:57.866120    4347 command_runner.go:124] > # registries (e.g., "quay.io", "registry.fedoraproject.org",
	I0810 22:32:57.866129    4347 command_runner.go:124] > # "registry.opensuse.org", etc.).
	I0810 22:32:57.866135    4347 command_runner.go:124] > #registries = [
	I0810 22:32:57.866145    4347 command_runner.go:124] > # 	"docker.io",
	I0810 22:32:57.866151    4347 command_runner.go:124] > #]
	I0810 22:32:57.866160    4347 command_runner.go:124] > # Temporary directory to use for storing big files
	I0810 22:32:57.866168    4347 command_runner.go:124] > big_files_temporary_dir = ""
	I0810 22:32:57.866179    4347 command_runner.go:124] > # The crio.network table contains settings pertaining to the management of
	I0810 22:32:57.866184    4347 command_runner.go:124] > # CNI plugins.
	I0810 22:32:57.866192    4347 command_runner.go:124] > [crio.network]
	I0810 22:32:57.866202    4347 command_runner.go:124] > # The default CNI network name to be selected. If not set or "", then
	I0810 22:32:57.866216    4347 command_runner.go:124] > # CRI-O will pick-up the first one found in network_dir.
	I0810 22:32:57.866225    4347 command_runner.go:124] > # cni_default_network = "kindnet"
	I0810 22:32:57.866235    4347 command_runner.go:124] > # Path to the directory where CNI configuration files are located.
	I0810 22:32:57.866245    4347 command_runner.go:124] > network_dir = "/etc/cni/net.d/"
	I0810 22:32:57.866254    4347 command_runner.go:124] > # Paths to directories where CNI plugin binaries are located.
	I0810 22:32:57.866264    4347 command_runner.go:124] > plugin_dirs = [
	I0810 22:32:57.866271    4347 command_runner.go:124] > 	"/opt/cni/bin/",
	I0810 22:32:57.866275    4347 command_runner.go:124] > ]
	I0810 22:32:57.866282    4347 command_runner.go:124] > # A necessary configuration for Prometheus based metrics retrieval
	I0810 22:32:57.866291    4347 command_runner.go:124] > [crio.metrics]
	I0810 22:32:57.866300    4347 command_runner.go:124] > # Globally enable or disable metrics support.
	I0810 22:32:57.866310    4347 command_runner.go:124] > enable_metrics = true
	I0810 22:32:57.866322    4347 command_runner.go:124] > # The port on which the metrics server will listen.
	I0810 22:32:57.866331    4347 command_runner.go:124] > metrics_port = 9090
	I0810 22:32:57.866401    4347 command_runner.go:124] > # Local socket path to bind the metrics server to
	I0810 22:32:57.866416    4347 command_runner.go:124] > metrics_socket = ""
	I0810 22:32:57.866486    4347 cni.go:93] Creating CNI manager for ""
	I0810 22:32:57.866504    4347 cni.go:154] 1 nodes found, recommending kindnet
	I0810 22:32:57.866519    4347 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0810 22:32:57.866536    4347 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.32 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20210810223223-30291 NodeName:multinode-20210810223223-30291 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.50.32 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0810 22:32:57.866698    4347 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.32
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "multinode-20210810223223-30291"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0810 22:32:57.866809    4347 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --hostname-override=multinode-20210810223223-30291 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.32 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:multinode-20210810223223-30291 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0810 22:32:57.866876    4347 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0810 22:32:57.874702    4347 command_runner.go:124] > kubeadm
	I0810 22:32:57.874716    4347 command_runner.go:124] > kubectl
	I0810 22:32:57.874720    4347 command_runner.go:124] > kubelet
	I0810 22:32:57.874911    4347 binaries.go:44] Found k8s binaries, skipping transfer
	I0810 22:32:57.874965    4347 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0810 22:32:57.881899    4347 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (510 bytes)
	I0810 22:32:57.893370    4347 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0810 22:32:57.904537    4347 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2074 bytes)
	I0810 22:32:57.915861    4347 ssh_runner.go:149] Run: grep 192.168.50.32	control-plane.minikube.internal$ /etc/hosts
	I0810 22:32:57.920843    4347 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0810 22:32:57.931480    4347 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291 for IP: 192.168.50.32
	I0810 22:32:57.931530    4347 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key
	I0810 22:32:57.931547    4347 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key
	I0810 22:32:57.931595    4347 certs.go:294] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/client.key
	I0810 22:32:57.931610    4347 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/client.crt with IP's: []
	I0810 22:32:58.003323    4347 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/client.crt ...
	I0810 22:32:58.003356    4347 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/client.crt: {Name:mk17a539d20321f4db5af5b2734d077b910d767c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:32:58.003566    4347 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/client.key ...
	I0810 22:32:58.003579    4347 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/client.key: {Name:mkf91e68e3a24af11429ac7001aa796033230923 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:32:58.003666    4347 certs.go:294] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/apiserver.key.d5d970b2
	I0810 22:32:58.003678    4347 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/apiserver.crt.d5d970b2 with IP's: [192.168.50.32 10.96.0.1 127.0.0.1 10.0.0.1]
	I0810 22:32:58.188567    4347 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/apiserver.crt.d5d970b2 ...
	I0810 22:32:58.188599    4347 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/apiserver.crt.d5d970b2: {Name:mk4c9f9fdbfe34760c33271a67021f8f00eb74cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:32:58.188786    4347 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/apiserver.key.d5d970b2 ...
	I0810 22:32:58.188799    4347 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/apiserver.key.d5d970b2: {Name:mk1765a0ac8d1c92eb5b9f050679d0d9d4659cf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:32:58.188876    4347 certs.go:305] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/apiserver.crt.d5d970b2 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/apiserver.crt
	I0810 22:32:58.188939    4347 certs.go:309] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/apiserver.key.d5d970b2 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/apiserver.key
	I0810 22:32:58.188994    4347 certs.go:294] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/proxy-client.key
	I0810 22:32:58.189002    4347 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/proxy-client.crt with IP's: []
	I0810 22:32:58.299072    4347 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/proxy-client.crt ...
	I0810 22:32:58.299104    4347 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/proxy-client.crt: {Name:mke338d5688758093711da9f55ca5536a523d43a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:32:58.299308    4347 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/proxy-client.key ...
	I0810 22:32:58.299368    4347 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/proxy-client.key: {Name:mk78c6b536821d16696f16ce642cf1181cdc7730 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:32:58.299469    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0810 22:32:58.299488    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0810 22:32:58.299498    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0810 22:32:58.299507    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0810 22:32:58.299519    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0810 22:32:58.299535    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0810 22:32:58.299548    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0810 22:32:58.299561    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0810 22:32:58.299611    4347 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/30291.pem (1338 bytes)
	W0810 22:32:58.299659    4347 certs.go:369] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/30291_empty.pem, impossibly tiny 0 bytes
	I0810 22:32:58.299675    4347 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem (1679 bytes)
	I0810 22:32:58.299708    4347 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem (1082 bytes)
	I0810 22:32:58.299731    4347 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem (1123 bytes)
	I0810 22:32:58.299753    4347 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem (1679 bytes)
	I0810 22:32:58.299797    4347 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/302912.pem (1708 bytes)
	I0810 22:32:58.299824    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0810 22:32:58.299839    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/30291.pem -> /usr/share/ca-certificates/30291.pem
	I0810 22:32:58.299850    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/302912.pem -> /usr/share/ca-certificates/302912.pem
	I0810 22:32:58.300769    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0810 22:32:58.317752    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0810 22:32:58.334154    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0810 22:32:58.350238    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0810 22:32:58.366402    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0810 22:32:58.383292    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0810 22:32:58.399598    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0810 22:32:58.416865    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0810 22:32:58.433080    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0810 22:32:58.449061    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/30291.pem --> /usr/share/ca-certificates/30291.pem (1338 bytes)
	I0810 22:32:58.464946    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/302912.pem --> /usr/share/ca-certificates/302912.pem (1708 bytes)
	I0810 22:32:58.481673    4347 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0810 22:32:58.492957    4347 ssh_runner.go:149] Run: openssl version
	I0810 22:32:58.498798    4347 command_runner.go:124] > OpenSSL 1.1.1k  25 Mar 2021
	I0810 22:32:58.498856    4347 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0810 22:32:58.506504    4347 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0810 22:32:58.511065    4347 command_runner.go:124] > -rw-r--r-- 1 root root 1111 Aug 10 22:18 /usr/share/ca-certificates/minikubeCA.pem
	I0810 22:32:58.511199    4347 certs.go:416] hashing: -rw-r--r-- 1 root root 1111 Aug 10 22:18 /usr/share/ca-certificates/minikubeCA.pem
	I0810 22:32:58.511248    4347 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0810 22:32:58.517002    4347 command_runner.go:124] > b5213941
	I0810 22:32:58.517066    4347 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0810 22:32:58.524968    4347 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/30291.pem && ln -fs /usr/share/ca-certificates/30291.pem /etc/ssl/certs/30291.pem"
	I0810 22:32:58.532847    4347 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/30291.pem
	I0810 22:32:58.537289    4347 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Aug 10 22:27 /usr/share/ca-certificates/30291.pem
	I0810 22:32:58.537315    4347 certs.go:416] hashing: -rw-r--r-- 1 root root 1338 Aug 10 22:27 /usr/share/ca-certificates/30291.pem
	I0810 22:32:58.537348    4347 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/30291.pem
	I0810 22:32:58.543181    4347 command_runner.go:124] > 51391683
	I0810 22:32:58.543227    4347 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/30291.pem /etc/ssl/certs/51391683.0"
	I0810 22:32:58.550803    4347 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/302912.pem && ln -fs /usr/share/ca-certificates/302912.pem /etc/ssl/certs/302912.pem"
	I0810 22:32:58.558884    4347 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/302912.pem
	I0810 22:32:58.564254    4347 command_runner.go:124] > -rw-r--r-- 1 root root 1708 Aug 10 22:27 /usr/share/ca-certificates/302912.pem
	I0810 22:32:58.564286    4347 certs.go:416] hashing: -rw-r--r-- 1 root root 1708 Aug 10 22:27 /usr/share/ca-certificates/302912.pem
	I0810 22:32:58.564324    4347 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/302912.pem
	I0810 22:32:58.570156    4347 command_runner.go:124] > 3ec20f2e
	I0810 22:32:58.570208    4347 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/302912.pem /etc/ssl/certs/3ec20f2e.0"
	I0810 22:32:58.578077    4347 kubeadm.go:390] StartCluster: {Name:multinode-20210810223223-30291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12122/minikube-v1.22.0-1628238775-12122.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-20210810223223-30291 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.32 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0}
	I0810 22:32:58.578160    4347 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0810 22:32:58.578197    4347 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0810 22:32:58.610996    4347 cri.go:76] found id: ""
	I0810 22:32:58.611056    4347 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0810 22:32:58.617953    4347 command_runner.go:124] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0810 22:32:58.618042    4347 command_runner.go:124] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0810 22:32:58.618087    4347 command_runner.go:124] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0810 22:32:58.618303    4347 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0810 22:32:58.625001    4347 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0810 22:32:58.631226    4347 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0810 22:32:58.631253    4347 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0810 22:32:58.631265    4347 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0810 22:32:58.631277    4347 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0810 22:32:58.631317    4347 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0810 22:32:58.631359    4347 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	I0810 22:32:59.088872    4347 command_runner.go:124] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0810 22:33:19.855261    4347 command_runner.go:124] > [init] Using Kubernetes version: v1.21.3
	I0810 22:33:19.855342    4347 command_runner.go:124] > [preflight] Running pre-flight checks
	I0810 22:33:19.855418    4347 command_runner.go:124] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0810 22:33:19.855553    4347 command_runner.go:124] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0810 22:33:19.855710    4347 command_runner.go:124] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0810 22:33:19.857400    4347 out.go:204]   - Generating certificates and keys ...
	I0810 22:33:19.855905    4347 command_runner.go:124] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0810 22:33:19.857506    4347 command_runner.go:124] > [certs] Using existing ca certificate authority
	I0810 22:33:19.857596    4347 command_runner.go:124] > [certs] Using existing apiserver certificate and key on disk
	I0810 22:33:19.857703    4347 command_runner.go:124] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0810 22:33:19.857789    4347 command_runner.go:124] > [certs] Generating "front-proxy-ca" certificate and key
	I0810 22:33:19.857918    4347 command_runner.go:124] > [certs] Generating "front-proxy-client" certificate and key
	I0810 22:33:19.857998    4347 command_runner.go:124] > [certs] Generating "etcd/ca" certificate and key
	I0810 22:33:19.858067    4347 command_runner.go:124] > [certs] Generating "etcd/server" certificate and key
	I0810 22:33:19.858264    4347 command_runner.go:124] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-20210810223223-30291] and IPs [192.168.50.32 127.0.0.1 ::1]
	I0810 22:33:19.858337    4347 command_runner.go:124] > [certs] Generating "etcd/peer" certificate and key
	I0810 22:33:19.858509    4347 command_runner.go:124] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-20210810223223-30291] and IPs [192.168.50.32 127.0.0.1 ::1]
	I0810 22:33:19.858602    4347 command_runner.go:124] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0810 22:33:19.858694    4347 command_runner.go:124] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0810 22:33:19.858758    4347 command_runner.go:124] > [certs] Generating "sa" key and public key
	I0810 22:33:19.858827    4347 command_runner.go:124] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0810 22:33:19.858890    4347 command_runner.go:124] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0810 22:33:19.858961    4347 command_runner.go:124] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0810 22:33:19.859061    4347 command_runner.go:124] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0810 22:33:19.859143    4347 command_runner.go:124] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0810 22:33:19.859278    4347 command_runner.go:124] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0810 22:33:19.859383    4347 command_runner.go:124] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0810 22:33:19.859439    4347 command_runner.go:124] > [kubelet-start] Starting the kubelet
	I0810 22:33:19.860954    4347 out.go:204]   - Booting up control plane ...
	I0810 22:33:19.859602    4347 command_runner.go:124] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0810 22:33:19.861065    4347 command_runner.go:124] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0810 22:33:19.861163    4347 command_runner.go:124] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0810 22:33:19.861244    4347 command_runner.go:124] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0810 22:33:19.861352    4347 command_runner.go:124] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0810 22:33:19.861519    4347 command_runner.go:124] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0810 22:33:19.861617    4347 command_runner.go:124] > [apiclient] All control plane components are healthy after 16.015766 seconds
	I0810 22:33:19.861760    4347 command_runner.go:124] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0810 22:33:19.861936    4347 command_runner.go:124] > [kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
	I0810 22:33:19.861992    4347 command_runner.go:124] > [upload-certs] Skipping phase. Please see --upload-certs
	I0810 22:33:19.862191    4347 command_runner.go:124] > [mark-control-plane] Marking the node multinode-20210810223223-30291 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0810 22:33:19.863747    4347 out.go:204]   - Configuring RBAC rules ...
	I0810 22:33:19.862296    4347 command_runner.go:124] > [bootstrap-token] Using token: jxsfae.kz2mrngz77ughh9a
	I0810 22:33:19.863866    4347 command_runner.go:124] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0810 22:33:19.863971    4347 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0810 22:33:19.864151    4347 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0810 22:33:19.864278    4347 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0810 22:33:19.864376    4347 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0810 22:33:19.864457    4347 command_runner.go:124] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0810 22:33:19.864553    4347 command_runner.go:124] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0810 22:33:19.864598    4347 command_runner.go:124] > [addons] Applied essential addon: CoreDNS
	I0810 22:33:19.864641    4347 command_runner.go:124] > [addons] Applied essential addon: kube-proxy
	I0810 22:33:19.864693    4347 command_runner.go:124] > Your Kubernetes control-plane has initialized successfully!
	I0810 22:33:19.864767    4347 command_runner.go:124] > To start using your cluster, you need to run the following as a regular user:
	I0810 22:33:19.864820    4347 command_runner.go:124] >   mkdir -p $HOME/.kube
	I0810 22:33:19.864910    4347 command_runner.go:124] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0810 22:33:19.865001    4347 command_runner.go:124] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0810 22:33:19.865090    4347 command_runner.go:124] > Alternatively, if you are the root user, you can run:
	I0810 22:33:19.865166    4347 command_runner.go:124] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0810 22:33:19.865245    4347 command_runner.go:124] > You should now deploy a pod network to the cluster.
	I0810 22:33:19.865320    4347 command_runner.go:124] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0810 22:33:19.865381    4347 command_runner.go:124] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0810 22:33:19.865452    4347 command_runner.go:124] > You can now join any number of control-plane nodes by copying certificate authorities
	I0810 22:33:19.865518    4347 command_runner.go:124] > and service account keys on each node and then running the following as root:
	I0810 22:33:19.865595    4347 command_runner.go:124] >   kubeadm join control-plane.minikube.internal:8443 --token jxsfae.kz2mrngz77ughh9a \
	I0810 22:33:19.865682    4347 command_runner.go:124] > 	--discovery-token-ca-cert-hash sha256:792de24c5d5a120bf4aa3a25755c9ac1b4ccaeb2dbca2444b5b705903a56bd34 \
	I0810 22:33:19.865701    4347 command_runner.go:124] > 	--control-plane 
	I0810 22:33:19.865795    4347 command_runner.go:124] > Then you can join any number of worker nodes by running the following on each as root:
	I0810 22:33:19.865943    4347 command_runner.go:124] > kubeadm join control-plane.minikube.internal:8443 --token jxsfae.kz2mrngz77ughh9a \
	I0810 22:33:19.866030    4347 command_runner.go:124] > 	--discovery-token-ca-cert-hash sha256:792de24c5d5a120bf4aa3a25755c9ac1b4ccaeb2dbca2444b5b705903a56bd34 
	I0810 22:33:19.866052    4347 cni.go:93] Creating CNI manager for ""
	I0810 22:33:19.866061    4347 cni.go:154] 1 nodes found, recommending kindnet
	I0810 22:33:19.867722    4347 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0810 22:33:19.867788    4347 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0810 22:33:19.876032    4347 command_runner.go:124] >   File: /opt/cni/bin/portmap
	I0810 22:33:19.876052    4347 command_runner.go:124] >   Size: 2853400   	Blocks: 5576       IO Block: 4096   regular file
	I0810 22:33:19.876059    4347 command_runner.go:124] > Device: 10h/16d	Inode: 22873       Links: 1
	I0810 22:33:19.876069    4347 command_runner.go:124] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0810 22:33:19.876077    4347 command_runner.go:124] > Access: 2021-08-10 22:32:38.220478056 +0000
	I0810 22:33:19.876087    4347 command_runner.go:124] > Modify: 2021-08-06 09:23:24.000000000 +0000
	I0810 22:33:19.876095    4347 command_runner.go:124] > Change: 2021-08-10 22:32:33.951478056 +0000
	I0810 22:33:19.876102    4347 command_runner.go:124] >  Birth: -
	I0810 22:33:19.876486    4347 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0810 22:33:19.876501    4347 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0810 22:33:19.912205    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0810 22:33:20.321470    4347 command_runner.go:124] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0810 22:33:20.350894    4347 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0810 22:33:20.370625    4347 command_runner.go:124] > serviceaccount/kindnet created
	I0810 22:33:20.389242    4347 command_runner.go:124] > daemonset.apps/kindnet created
	I0810 22:33:20.391588    4347 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0810 22:33:20.391674    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:20.391693    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=877a5691753f15214a0c269ac69dcdc5a4d99fcd minikube.k8s.io/name=multinode-20210810223223-30291 minikube.k8s.io/updated_at=2021_08_10T22_33_20_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:20.407345    4347 command_runner.go:124] > -16
	I0810 22:33:20.407383    4347 ops.go:34] apiserver oom_adj: -16
	I0810 22:33:20.570795    4347 command_runner.go:124] > node/multinode-20210810223223-30291 labeled
	I0810 22:33:20.572850    4347 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0810 22:33:20.572929    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:20.680992    4347 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:33:21.183711    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:21.285793    4347 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:33:21.683325    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:21.787393    4347 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:33:22.183217    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:22.280960    4347 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:33:22.683627    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:22.786300    4347 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:33:23.184001    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:23.281890    4347 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:33:23.683719    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:23.783715    4347 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:33:24.183071    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:24.283417    4347 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:33:24.683673    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:24.782075    4347 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:33:25.183170    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:25.462085    4347 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:33:25.683395    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:25.808072    4347 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:33:26.183714    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:26.287318    4347 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:33:26.684099    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:26.788608    4347 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:33:27.183123    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:27.295503    4347 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:33:27.683181    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:27.790159    4347 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:33:28.183200    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:28.289849    4347 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:33:28.683553    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:28.794165    4347 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:33:29.184031    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:29.289855    4347 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:33:29.683422    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:29.949111    4347 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:33:30.183826    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:30.301589    4347 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:33:30.683318    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:30.795011    4347 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:33:31.183583    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:31.317516    4347 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:33:31.683042    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:31.802754    4347 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:33:32.183318    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:32.312324    4347 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:33:32.683672    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:32.805073    4347 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0810 22:33:33.184046    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0810 22:33:33.348038    4347 command_runner.go:124] > NAME      SECRETS   AGE
	I0810 22:33:33.348064    4347 command_runner.go:124] > default   0         0s
	I0810 22:33:33.349048    4347 kubeadm.go:985] duration metric: took 12.957439157s to wait for elevateKubeSystemPrivileges.
	I0810 22:33:33.349073    4347 kubeadm.go:392] StartCluster complete in 34.771003448s
	I0810 22:33:33.349094    4347 settings.go:142] acquiring lock: {Name:mk9de8b97604ec8ec02e9734983b03b6308517c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:33:33.349231    4347 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0810 22:33:33.350223    4347 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig: {Name:mkb7fc7bcea695301999150daa705ac3e8a4c8a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:33:33.350677    4347 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0810 22:33:33.350929    4347 kapi.go:59] client config for multinode-20210810223223-30291: &rest.Config{Host:"https://192.168.50.32:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2660), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0810 22:33:33.351462    4347 cert_rotation.go:137] Starting client certificate rotation controller
	I0810 22:33:33.353044    4347 round_trippers.go:432] GET https://192.168.50.32:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0810 22:33:33.353062    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:33.353069    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:33.353075    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:33.371626    4347 round_trippers.go:457] Response Status: 200 OK in 18 milliseconds
	I0810 22:33:33.371650    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:33.371656    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:33.371661    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:33.371665    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:33.371670    4347 round_trippers.go:463]     Content-Length: 291
	I0810 22:33:33.371677    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:33 GMT
	I0810 22:33:33.371684    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:33.374939    4347 request.go:1123] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"8e58ff6b-1a8a-468a-822b-501079499c83","resourceVersion":"263","creationTimestamp":"2021-08-10T22:33:19Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0810 22:33:33.375876    4347 request.go:1123] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"8e58ff6b-1a8a-468a-822b-501079499c83","resourceVersion":"263","creationTimestamp":"2021-08-10T22:33:19Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0810 22:33:33.375961    4347 round_trippers.go:432] PUT https://192.168.50.32:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0810 22:33:33.375976    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:33.375985    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:33.375993    4347 round_trippers.go:442]     Content-Type: application/json
	I0810 22:33:33.375999    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:33.386678    4347 round_trippers.go:457] Response Status: 200 OK in 10 milliseconds
	I0810 22:33:33.386698    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:33.386704    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:33.386708    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:33.386711    4347 round_trippers.go:463]     Content-Length: 291
	I0810 22:33:33.386714    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:33 GMT
	I0810 22:33:33.386717    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:33.386720    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:33.387457    4347 request.go:1123] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"8e58ff6b-1a8a-468a-822b-501079499c83","resourceVersion":"400","creationTimestamp":"2021-08-10T22:33:19Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0810 22:33:33.887951    4347 round_trippers.go:432] GET https://192.168.50.32:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0810 22:33:33.887978    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:33.887984    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:33.887989    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:33.913547    4347 round_trippers.go:457] Response Status: 200 OK in 25 milliseconds
	I0810 22:33:33.913578    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:33.913586    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:33.913591    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:33.913595    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:33.913600    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:33.913604    4347 round_trippers.go:463]     Content-Length: 291
	I0810 22:33:33.913609    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:33 GMT
	I0810 22:33:33.921020    4347 request.go:1123] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"8e58ff6b-1a8a-468a-822b-501079499c83","resourceVersion":"413","creationTimestamp":"2021-08-10T22:33:19Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0810 22:33:33.921152    4347 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "multinode-20210810223223-30291" rescaled to 1
	I0810 22:33:33.921211    4347 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.50.32 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0810 22:33:33.922878    4347 out.go:177] * Verifying Kubernetes components...
	I0810 22:33:33.922947    4347 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0810 22:33:33.921263    4347 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0810 22:33:33.921286    4347 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0810 22:33:33.923064    4347 addons.go:59] Setting storage-provisioner=true in profile "multinode-20210810223223-30291"
	I0810 22:33:33.923087    4347 addons.go:135] Setting addon storage-provisioner=true in "multinode-20210810223223-30291"
	W0810 22:33:33.923094    4347 addons.go:147] addon storage-provisioner should already be in state true
	I0810 22:33:33.923123    4347 host.go:66] Checking if "multinode-20210810223223-30291" exists ...
	I0810 22:33:33.923065    4347 addons.go:59] Setting default-storageclass=true in profile "multinode-20210810223223-30291"
	I0810 22:33:33.923170    4347 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-20210810223223-30291"
	I0810 22:33:33.923559    4347 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:33:33.923603    4347 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:33:33.923615    4347 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:33:33.923657    4347 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:33:33.935187    4347 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:32915
	I0810 22:33:33.935713    4347 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:33:33.936343    4347 main.go:130] libmachine: Using API Version  1
	I0810 22:33:33.936389    4347 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:33:33.936785    4347 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:33:33.937293    4347 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:33:33.937343    4347 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:33:33.945252    4347 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35925
	I0810 22:33:33.945682    4347 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:33:33.946201    4347 main.go:130] libmachine: Using API Version  1
	I0810 22:33:33.946233    4347 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:33:33.946626    4347 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:33:33.946838    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetState
	I0810 22:33:33.948560    4347 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35591
	I0810 22:33:33.948970    4347 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:33:33.949430    4347 main.go:130] libmachine: Using API Version  1
	I0810 22:33:33.949456    4347 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:33:33.949782    4347 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:33:33.949963    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetState
	I0810 22:33:33.951183    4347 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0810 22:33:33.951464    4347 kapi.go:59] client config for multinode-20210810223223-30291: &rest.Config{Host:"https://192.168.50.32:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2660), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0810 22:33:33.953059    4347 round_trippers.go:432] GET https://192.168.50.32:8443/apis/storage.k8s.io/v1/storageclasses
	I0810 22:33:33.953077    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:33.953090    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:33.953097    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:33.953137    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .DriverName
	I0810 22:33:33.955126    4347 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0810 22:33:33.955233    4347 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0810 22:33:33.955248    4347 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0810 22:33:33.955271    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHHostname
	I0810 22:33:33.960477    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:33:33.960870    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:89", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:32:38 +0000 UTC Type:0 Mac:52:54:00:ce:d8:89 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:multinode-20210810223223-30291 Clientid:01:52:54:00:ce:d8:89}
	I0810 22:33:33.960904    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined IP address 192.168.50.32 and MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:33:33.960995    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHPort
	I0810 22:33:33.961169    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHKeyPath
	I0810 22:33:33.961314    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHUsername
	I0810 22:33:33.961476    4347 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291/id_rsa Username:docker}
	I0810 22:33:33.965291    4347 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0810 22:33:33.965523    4347 kapi.go:59] client config for multinode-20210810223223-30291: &rest.Config{Host:"https://192.168.50.32:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2660), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0810 22:33:33.965934    4347 round_trippers.go:457] Response Status: 200 OK in 12 milliseconds
	I0810 22:33:33.965949    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:33.965954    4347 round_trippers.go:463]     Content-Length: 109
	I0810 22:33:33.965959    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:33 GMT
	I0810 22:33:33.965963    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:33.965968    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:33.965972    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:33.965976    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:33.966008    4347 request.go:1123] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"427"},"items":[]}
	I0810 22:33:33.966591    4347 addons.go:135] Setting addon default-storageclass=true in "multinode-20210810223223-30291"
	W0810 22:33:33.966608    4347 addons.go:147] addon default-storageclass should already be in state true
	I0810 22:33:33.966635    4347 host.go:66] Checking if "multinode-20210810223223-30291" exists ...
	I0810 22:33:33.966794    4347 node_ready.go:35] waiting up to 6m0s for node "multinode-20210810223223-30291" to be "Ready" ...
	I0810 22:33:33.966867    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291
	I0810 22:33:33.966878    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:33.966885    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:33.966895    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:33.966961    4347 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:33:33.967011    4347 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:33:33.973657    4347 round_trippers.go:457] Response Status: 200 OK in 6 milliseconds
	I0810 22:33:33.973677    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:33.973684    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:33.973689    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:33.973693    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:33.973698    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:33 GMT
	I0810 22:33:33.973702    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:33.978021    4347 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42673
	I0810 22:33:33.978515    4347 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:33:33.979008    4347 main.go:130] libmachine: Using API Version  1
	I0810 22:33:33.979030    4347 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:33:33.979385    4347 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:33:33.979875    4347 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:33:33.979912    4347 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:33:33.980553    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","resourceVersion":"406","creationTimestamp":"2021-08-10T22:33:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223223-30291","minikube.k8s.io/updated_at":"2021_08_10T22_33_20_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22 [truncated 6553 chars]
	I0810 22:33:33.981915    4347 node_ready.go:49] node "multinode-20210810223223-30291" has status "Ready":"True"
	I0810 22:33:33.981932    4347 node_ready.go:38] duration metric: took 15.119864ms waiting for node "multinode-20210810223223-30291" to be "Ready" ...
	I0810 22:33:33.981944    4347 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0810 22:33:33.982029    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods
	I0810 22:33:33.982045    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:33.982052    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:33.982058    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:33.990499    4347 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43719
	I0810 22:33:33.990938    4347 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:33:33.991400    4347 main.go:130] libmachine: Using API Version  1
	I0810 22:33:33.991426    4347 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:33:33.991750    4347 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:33:33.991947    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetState
	I0810 22:33:33.994945    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .DriverName
	I0810 22:33:33.995199    4347 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0810 22:33:33.995219    4347 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0810 22:33:33.995240    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHHostname
	I0810 22:33:33.996016    4347 round_trippers.go:457] Response Status: 200 OK in 13 milliseconds
	I0810 22:33:33.996033    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:33.996039    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:33.996045    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:33 GMT
	I0810 22:33:33.996058    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:33.996065    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:33.996070    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:34.000696    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:33:34.001087    4347 request.go:1123] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"429"},"items":[{"metadata":{"name":"etcd-multinode-20210810223223-30291","namespace":"kube-system","uid":"8498143e-4386-44bc-9541-3193bd504c1d","resourceVersion":"297","creationTimestamp":"2021-08-10T22:33:25Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.50.32:2379","kubernetes.io/config.hash":"ee4e4232c1192224bf90edfa1030cde5","kubernetes.io/config.mirror":"ee4e4232c1192224bf90edfa1030cde5","kubernetes.io/config.seen":"2021-08-10T22:33:24.968043630Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:33:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.adv [truncated 40045 chars]
	I0810 22:33:34.001123    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:89", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:32:38 +0000 UTC Type:0 Mac:52:54:00:ce:d8:89 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:multinode-20210810223223-30291 Clientid:01:52:54:00:ce:d8:89}
	I0810 22:33:34.001150    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined IP address 192.168.50.32 and MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:33:34.001308    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHPort
	I0810 22:33:34.001469    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHKeyPath
	I0810 22:33:34.001608    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHUsername
	I0810 22:33:34.001753    4347 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291/id_rsa Username:docker}
	I0810 22:33:34.008864    4347 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-20210810223223-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:33:34.008964    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210810223223-30291
	I0810 22:33:34.008979    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:34.008987    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:34.008993    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:34.021173    4347 round_trippers.go:457] Response Status: 200 OK in 12 milliseconds
	I0810 22:33:34.021193    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:34.021199    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:34.021203    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:34.021208    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:34.021213    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:34.021218    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:34 GMT
	I0810 22:33:34.027109    4347 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210810223223-30291","namespace":"kube-system","uid":"8498143e-4386-44bc-9541-3193bd504c1d","resourceVersion":"297","creationTimestamp":"2021-08-10T22:33:25Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.50.32:2379","kubernetes.io/config.hash":"ee4e4232c1192224bf90edfa1030cde5","kubernetes.io/config.mirror":"ee4e4232c1192224bf90edfa1030cde5","kubernetes.io/config.seen":"2021-08-10T22:33:24.968043630Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:33:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 5793 chars]
	I0810 22:33:34.033222    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291
	I0810 22:33:34.033243    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:34.033251    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:34.033256    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:34.063731    4347 round_trippers.go:457] Response Status: 200 OK in 30 milliseconds
	I0810 22:33:34.063752    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:34.063758    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:34.063762    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:34.063766    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:34.063773    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:34.063777    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:34 GMT
	I0810 22:33:34.064156    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","resourceVersion":"406","creationTimestamp":"2021-08-10T22:33:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223223-30291","minikube.k8s.io/updated_at":"2021_08_10T22_33_20_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22 [truncated 6553 chars]
	I0810 22:33:34.368394    4347 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0810 22:33:34.441758    4347 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0810 22:33:34.466297    4347 command_runner.go:124] > apiVersion: v1
	I0810 22:33:34.466317    4347 command_runner.go:124] > data:
	I0810 22:33:34.466321    4347 command_runner.go:124] >   Corefile: |
	I0810 22:33:34.466325    4347 command_runner.go:124] >     .:53 {
	I0810 22:33:34.466330    4347 command_runner.go:124] >         errors
	I0810 22:33:34.466335    4347 command_runner.go:124] >         health {
	I0810 22:33:34.466339    4347 command_runner.go:124] >            lameduck 5s
	I0810 22:33:34.466343    4347 command_runner.go:124] >         }
	I0810 22:33:34.466346    4347 command_runner.go:124] >         ready
	I0810 22:33:34.466353    4347 command_runner.go:124] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0810 22:33:34.466357    4347 command_runner.go:124] >            pods insecure
	I0810 22:33:34.466362    4347 command_runner.go:124] >            fallthrough in-addr.arpa ip6.arpa
	I0810 22:33:34.466367    4347 command_runner.go:124] >            ttl 30
	I0810 22:33:34.466371    4347 command_runner.go:124] >         }
	I0810 22:33:34.466375    4347 command_runner.go:124] >         prometheus :9153
	I0810 22:33:34.466380    4347 command_runner.go:124] >         forward . /etc/resolv.conf {
	I0810 22:33:34.466388    4347 command_runner.go:124] >            max_concurrent 1000
	I0810 22:33:34.466391    4347 command_runner.go:124] >         }
	I0810 22:33:34.466395    4347 command_runner.go:124] >         cache 30
	I0810 22:33:34.466400    4347 command_runner.go:124] >         loop
	I0810 22:33:34.466403    4347 command_runner.go:124] >         reload
	I0810 22:33:34.466407    4347 command_runner.go:124] >         loadbalance
	I0810 22:33:34.466411    4347 command_runner.go:124] >     }
	I0810 22:33:34.466415    4347 command_runner.go:124] > kind: ConfigMap
	I0810 22:33:34.466422    4347 command_runner.go:124] > metadata:
	I0810 22:33:34.466428    4347 command_runner.go:124] >   creationTimestamp: "2021-08-10T22:33:19Z"
	I0810 22:33:34.466434    4347 command_runner.go:124] >   name: coredns
	I0810 22:33:34.466439    4347 command_runner.go:124] >   namespace: kube-system
	I0810 22:33:34.466444    4347 command_runner.go:124] >   resourceVersion: "255"
	I0810 22:33:34.466449    4347 command_runner.go:124] >   uid: 4c6f7d11-ffe0-48dd-ab28-31bb819ab94b
	I0810 22:33:34.490421    4347 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0810 22:33:34.565515    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210810223223-30291
	I0810 22:33:34.565543    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:34.565552    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:34.565556    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:34.569085    4347 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:33:34.569101    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:34.569106    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:34.569109    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:34.569112    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:34.569115    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:34.569118    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:34 GMT
	I0810 22:33:34.569785    4347 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210810223223-30291","namespace":"kube-system","uid":"8498143e-4386-44bc-9541-3193bd504c1d","resourceVersion":"297","creationTimestamp":"2021-08-10T22:33:25Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.50.32:2379","kubernetes.io/config.hash":"ee4e4232c1192224bf90edfa1030cde5","kubernetes.io/config.mirror":"ee4e4232c1192224bf90edfa1030cde5","kubernetes.io/config.seen":"2021-08-10T22:33:24.968043630Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:33:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 5793 chars]
	I0810 22:33:34.570092    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291
	I0810 22:33:34.570103    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:34.570108    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:34.570112    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:34.573028    4347 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:33:34.573043    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:34.573047    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:34.573051    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:34.573054    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:34.573057    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:34.573060    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:34 GMT
	I0810 22:33:34.574042    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","resourceVersion":"406","creationTimestamp":"2021-08-10T22:33:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223223-30291","minikube.k8s.io/updated_at":"2021_08_10T22_33_20_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22 [truncated 6553 chars]
	I0810 22:33:35.064683    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210810223223-30291
	I0810 22:33:35.064709    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:35.064715    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:35.064720    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:35.067614    4347 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:33:35.067631    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:35.067635    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:35 GMT
	I0810 22:33:35.067638    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:35.067641    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:35.067644    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:35.067647    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:35.068032    4347 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210810223223-30291","namespace":"kube-system","uid":"8498143e-4386-44bc-9541-3193bd504c1d","resourceVersion":"297","creationTimestamp":"2021-08-10T22:33:25Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.50.32:2379","kubernetes.io/config.hash":"ee4e4232c1192224bf90edfa1030cde5","kubernetes.io/config.mirror":"ee4e4232c1192224bf90edfa1030cde5","kubernetes.io/config.seen":"2021-08-10T22:33:24.968043630Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:33:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 5793 chars]
	I0810 22:33:35.068357    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291
	I0810 22:33:35.068370    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:35.068375    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:35.068380    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:35.070355    4347 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:33:35.070372    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:35.070377    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:35.070382    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:35 GMT
	I0810 22:33:35.070387    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:35.070391    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:35.070396    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:35.070774    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","resourceVersion":"406","creationTimestamp":"2021-08-10T22:33:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223223-30291","minikube.k8s.io/updated_at":"2021_08_10T22_33_20_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22 [truncated 6553 chars]
	I0810 22:33:35.565447    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210810223223-30291
	I0810 22:33:35.565470    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:35.565476    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:35.565480    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:35.568430    4347 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:33:35.568455    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:35.568462    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:35.568467    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:35.568471    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:35.568476    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:35.568480    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:35 GMT
	I0810 22:33:35.569359    4347 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210810223223-30291","namespace":"kube-system","uid":"8498143e-4386-44bc-9541-3193bd504c1d","resourceVersion":"297","creationTimestamp":"2021-08-10T22:33:25Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.50.32:2379","kubernetes.io/config.hash":"ee4e4232c1192224bf90edfa1030cde5","kubernetes.io/config.mirror":"ee4e4232c1192224bf90edfa1030cde5","kubernetes.io/config.seen":"2021-08-10T22:33:24.968043630Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:33:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 5793 chars]
	I0810 22:33:35.569799    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291
	I0810 22:33:35.569818    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:35.569825    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:35.569835    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:35.571756    4347 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:33:35.571771    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:35.571775    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:35.571779    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:35.571782    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:35.571790    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:35.571800    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:35 GMT
	I0810 22:33:35.572191    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","resourceVersion":"406","creationTimestamp":"2021-08-10T22:33:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223223-30291","minikube.k8s.io/updated_at":"2021_08_10T22_33_20_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22 [truncated 6553 chars]
	I0810 22:33:36.049911    4347 command_runner.go:124] > serviceaccount/storage-provisioner created
	I0810 22:33:36.065181    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210810223223-30291
	I0810 22:33:36.065209    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:36.065217    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:36.065223    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:36.067823    4347 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:33:36.067848    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:36.067856    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:36.067862    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:36.067868    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:36 GMT
	I0810 22:33:36.067873    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:36.067879    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:36.068359    4347 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210810223223-30291","namespace":"kube-system","uid":"8498143e-4386-44bc-9541-3193bd504c1d","resourceVersion":"297","creationTimestamp":"2021-08-10T22:33:25Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.50.32:2379","kubernetes.io/config.hash":"ee4e4232c1192224bf90edfa1030cde5","kubernetes.io/config.mirror":"ee4e4232c1192224bf90edfa1030cde5","kubernetes.io/config.seen":"2021-08-10T22:33:24.968043630Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:33:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 5793 chars]
	I0810 22:33:36.068786    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291
	I0810 22:33:36.068810    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:36.068817    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:36.068825    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:36.072138    4347 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:33:36.072159    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:36.072166    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:36.072172    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:36.072177    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:36.072206    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:36.072212    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:36 GMT
	I0810 22:33:36.072864    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","resourceVersion":"406","creationTimestamp":"2021-08-10T22:33:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223223-30291","minikube.k8s.io/updated_at":"2021_08_10T22_33_20_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22 [truncated 6553 chars]
	I0810 22:33:36.073159    4347 pod_ready.go:102] pod "etcd-multinode-20210810223223-30291" in "kube-system" namespace has status "Ready":"False"
	I0810 22:33:36.081380    4347 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0810 22:33:36.095147    4347 command_runner.go:124] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0810 22:33:36.126134    4347 command_runner.go:124] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0810 22:33:36.169525    4347 command_runner.go:124] > endpoints/k8s.io-minikube-hostpath created
	I0810 22:33:36.196944    4347 command_runner.go:124] > pod/storage-provisioner created
	I0810 22:33:36.201365    4347 command_runner.go:124] > storageclass.storage.k8s.io/standard created
	I0810 22:33:36.201402    4347 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.7596142s)
	I0810 22:33:36.201437    4347 main.go:130] libmachine: Making call to close driver server
	I0810 22:33:36.201458    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .Close
	I0810 22:33:36.201485    4347 command_runner.go:124] > configmap/coredns replaced
	I0810 22:33:36.201524    4347 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.711063544s)
	I0810 22:33:36.201552    4347 start.go:736] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS
	I0810 22:33:36.201774    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | Closing plugin on server side
	I0810 22:33:36.201787    4347 main.go:130] libmachine: Successfully made call to close driver server
	I0810 22:33:36.201805    4347 main.go:130] libmachine: Making call to close connection to plugin binary
	I0810 22:33:36.201815    4347 main.go:130] libmachine: Making call to close driver server
	I0810 22:33:36.201824    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .Close
	I0810 22:33:36.201891    4347 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.833456644s)
	I0810 22:33:36.201918    4347 main.go:130] libmachine: Making call to close driver server
	I0810 22:33:36.201928    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .Close
	I0810 22:33:36.202049    4347 main.go:130] libmachine: Successfully made call to close driver server
	I0810 22:33:36.202065    4347 main.go:130] libmachine: Making call to close connection to plugin binary
	I0810 22:33:36.202078    4347 main.go:130] libmachine: Making call to close driver server
	I0810 22:33:36.202088    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .Close
	I0810 22:33:36.202241    4347 main.go:130] libmachine: Successfully made call to close driver server
	I0810 22:33:36.202286    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | Closing plugin on server side
	I0810 22:33:36.202288    4347 main.go:130] libmachine: Making call to close connection to plugin binary
	I0810 22:33:36.202338    4347 main.go:130] libmachine: Making call to close driver server
	I0810 22:33:36.202349    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .Close
	I0810 22:33:36.202430    4347 main.go:130] libmachine: Successfully made call to close driver server
	I0810 22:33:36.202478    4347 main.go:130] libmachine: Making call to close connection to plugin binary
	I0810 22:33:36.202436    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | Closing plugin on server side
	I0810 22:33:36.202606    4347 main.go:130] libmachine: Successfully made call to close driver server
	I0810 22:33:36.202652    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | Closing plugin on server side
	I0810 22:33:36.202656    4347 main.go:130] libmachine: Making call to close connection to plugin binary
	I0810 22:33:36.205016    4347 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0810 22:33:36.205046    4347 addons.go:344] enableAddons completed in 2.283766308s
	I0810 22:33:36.565144    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210810223223-30291
	I0810 22:33:36.565181    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:36.565188    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:36.565194    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:36.568817    4347 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:33:36.568842    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:36.568849    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:36 GMT
	I0810 22:33:36.568854    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:36.568858    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:36.568863    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:36.568867    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:36.569488    4347 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210810223223-30291","namespace":"kube-system","uid":"8498143e-4386-44bc-9541-3193bd504c1d","resourceVersion":"297","creationTimestamp":"2021-08-10T22:33:25Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.50.32:2379","kubernetes.io/config.hash":"ee4e4232c1192224bf90edfa1030cde5","kubernetes.io/config.mirror":"ee4e4232c1192224bf90edfa1030cde5","kubernetes.io/config.seen":"2021-08-10T22:33:24.968043630Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:33:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 5793 chars]
	I0810 22:33:36.569821    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291
	I0810 22:33:36.569835    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:36.569842    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:36.569848    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:36.574427    4347 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0810 22:33:36.574450    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:36.574456    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:36.574461    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:36.574466    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:36.574470    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:36.574475    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:36 GMT
	I0810 22:33:36.575222    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","resourceVersion":"406","creationTimestamp":"2021-08-10T22:33:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223223-30291","minikube.k8s.io/updated_at":"2021_08_10T22_33_20_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22 [truncated 6553 chars]
	I0810 22:33:37.064904    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210810223223-30291
	I0810 22:33:37.064934    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:37.064941    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:37.064946    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:37.069230    4347 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0810 22:33:37.069248    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:37.069259    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:37.069265    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:37.069270    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:37.069274    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:37.069278    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:37 GMT
	I0810 22:33:37.069958    4347 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210810223223-30291","namespace":"kube-system","uid":"8498143e-4386-44bc-9541-3193bd504c1d","resourceVersion":"297","creationTimestamp":"2021-08-10T22:33:25Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.50.32:2379","kubernetes.io/config.hash":"ee4e4232c1192224bf90edfa1030cde5","kubernetes.io/config.mirror":"ee4e4232c1192224bf90edfa1030cde5","kubernetes.io/config.seen":"2021-08-10T22:33:24.968043630Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:33:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 5793 chars]
	I0810 22:33:37.070283    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291
	I0810 22:33:37.070296    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:37.070302    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:37.070306    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:37.073821    4347 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:33:37.073837    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:37.073843    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:37.073848    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:37.073852    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:37.073856    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:37.073860    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:37 GMT
	I0810 22:33:37.074059    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","resourceVersion":"406","creationTimestamp":"2021-08-10T22:33:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223223-30291","minikube.k8s.io/updated_at":"2021_08_10T22_33_20_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22 [truncated 6553 chars]
	I0810 22:33:37.564822    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210810223223-30291
	I0810 22:33:37.564865    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:37.564878    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:37.564886    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:37.568039    4347 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:33:37.568056    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:37.568064    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:37.568069    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:37 GMT
	I0810 22:33:37.568074    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:37.568079    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:37.568083    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:37.568598    4347 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210810223223-30291","namespace":"kube-system","uid":"8498143e-4386-44bc-9541-3193bd504c1d","resourceVersion":"297","creationTimestamp":"2021-08-10T22:33:25Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.50.32:2379","kubernetes.io/config.hash":"ee4e4232c1192224bf90edfa1030cde5","kubernetes.io/config.mirror":"ee4e4232c1192224bf90edfa1030cde5","kubernetes.io/config.seen":"2021-08-10T22:33:24.968043630Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:33:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 5793 chars]
	I0810 22:33:37.568987    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291
	I0810 22:33:37.569005    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:37.569011    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:37.569014    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:37.572787    4347 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:33:37.572799    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:37.572802    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:37.572806    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:37.572809    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:37.572812    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:37.572816    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:37 GMT
	I0810 22:33:37.573277    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","resourceVersion":"406","creationTimestamp":"2021-08-10T22:33:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223223-30291","minikube.k8s.io/updated_at":"2021_08_10T22_33_20_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22 [truncated 6553 chars]
	I0810 22:33:38.064889    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210810223223-30291
	I0810 22:33:38.064914    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:38.064921    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:38.064925    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:38.071370    4347 round_trippers.go:457] Response Status: 200 OK in 6 milliseconds
	I0810 22:33:38.071391    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:38.071397    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:38 GMT
	I0810 22:33:38.071402    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:38.071407    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:38.071411    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:38.071416    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:38.071608    4347 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210810223223-30291","namespace":"kube-system","uid":"8498143e-4386-44bc-9541-3193bd504c1d","resourceVersion":"481","creationTimestamp":"2021-08-10T22:33:25Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.50.32:2379","kubernetes.io/config.hash":"ee4e4232c1192224bf90edfa1030cde5","kubernetes.io/config.mirror":"ee4e4232c1192224bf90edfa1030cde5","kubernetes.io/config.seen":"2021-08-10T22:33:24.968043630Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:33:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 5792 chars]
	I0810 22:33:38.071922    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291
	I0810 22:33:38.071936    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:38.071941    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:38.071945    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:38.080215    4347 round_trippers.go:457] Response Status: 200 OK in 8 milliseconds
	I0810 22:33:38.080236    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:38.080242    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:38 GMT
	I0810 22:33:38.080245    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:38.080248    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:38.080251    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:38.080254    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:38.081469    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","resourceVersion":"406","creationTimestamp":"2021-08-10T22:33:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223223-30291","minikube.k8s.io/updated_at":"2021_08_10T22_33_20_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22 [truncated 6553 chars]
	I0810 22:33:38.081711    4347 pod_ready.go:102] pod "etcd-multinode-20210810223223-30291" in "kube-system" namespace has status "Ready":"False"
	I0810 22:33:38.565120    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210810223223-30291
	I0810 22:33:38.565144    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:38.565150    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:38.565157    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:38.568736    4347 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:33:38.568759    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:38.568765    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:38.568768    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:38 GMT
	I0810 22:33:38.568771    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:38.568775    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:38.568778    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:38.568867    4347 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210810223223-30291","namespace":"kube-system","uid":"8498143e-4386-44bc-9541-3193bd504c1d","resourceVersion":"489","creationTimestamp":"2021-08-10T22:33:25Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.50.32:2379","kubernetes.io/config.hash":"ee4e4232c1192224bf90edfa1030cde5","kubernetes.io/config.mirror":"ee4e4232c1192224bf90edfa1030cde5","kubernetes.io/config.seen":"2021-08-10T22:33:24.968043630Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:33:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 5569 chars]
	I0810 22:33:38.569211    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291
	I0810 22:33:38.569227    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:38.569232    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:38.569236    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:38.571624    4347 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:33:38.571642    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:38.571647    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:38.571652    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:38.571657    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:38.571661    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:38.571666    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:38 GMT
	I0810 22:33:38.571821    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","resourceVersion":"406","creationTimestamp":"2021-08-10T22:33:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223223-30291","minikube.k8s.io/updated_at":"2021_08_10T22_33_20_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22 [truncated 6553 chars]
	I0810 22:33:38.572071    4347 pod_ready.go:92] pod "etcd-multinode-20210810223223-30291" in "kube-system" namespace has status "Ready":"True"
	I0810 22:33:38.572090    4347 pod_ready.go:81] duration metric: took 4.563191922s waiting for pod "etcd-multinode-20210810223223-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:33:38.572106    4347 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-20210810223223-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:33:38.572193    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20210810223223-30291
	I0810 22:33:38.572205    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:38.572211    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:38.572215    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:38.573981    4347 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:33:38.573997    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:38.574003    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:38.574008    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:38.574012    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:38.574016    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:38.574020    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:38 GMT
	I0810 22:33:38.574199    4347 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20210810223223-30291","namespace":"kube-system","uid":"1c83d52d-8a08-42be-9c8a-6420a1bdb75c","resourceVersion":"317","creationTimestamp":"2021-08-10T22:33:17Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.50.32:8443","kubernetes.io/config.hash":"9099813bef5425d688516ac434247f4d","kubernetes.io/config.mirror":"9099813bef5425d688516ac434247f4d","kubernetes.io/config.seen":"2021-08-10T22:33:07.454085484Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:33:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address [truncated 7249 chars]
	I0810 22:33:38.574503    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291
	I0810 22:33:38.574515    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:38.574520    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:38.574524    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:38.576855    4347 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:33:38.576871    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:38.576877    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:38.576882    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:38.576886    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:38.576891    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:38.576895    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:38 GMT
	I0810 22:33:38.577455    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","resourceVersion":"406","creationTimestamp":"2021-08-10T22:33:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223223-30291","minikube.k8s.io/updated_at":"2021_08_10T22_33_20_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22 [truncated 6553 chars]
	I0810 22:33:38.577692    4347 pod_ready.go:92] pod "kube-apiserver-multinode-20210810223223-30291" in "kube-system" namespace has status "Ready":"True"
	I0810 22:33:38.577708    4347 pod_ready.go:81] duration metric: took 5.570205ms waiting for pod "kube-apiserver-multinode-20210810223223-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:33:38.577721    4347 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-20210810223223-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:33:38.577772    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210810223223-30291
	I0810 22:33:38.577783    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:38.577790    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:38.577795    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:38.579574    4347 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:33:38.579590    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:38.579596    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:38 GMT
	I0810 22:33:38.579600    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:38.579605    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:38.579609    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:38.579614    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:38.579888    4347 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210810223223-30291","namespace":"kube-system","uid":"9305e895-2f70-44a4-8319-6f50b7e7a0ce","resourceVersion":"456","creationTimestamp":"2021-08-10T22:33:25Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"77761625d867cf54e5130d9def04b55c","kubernetes.io/config.mirror":"77761625d867cf54e5130d9def04b55c","kubernetes.io/config.seen":"2021-08-10T22:33:24.968061293Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:33:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi
g.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.s [truncated 6810 chars]
	I0810 22:33:38.580216    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291
	I0810 22:33:38.580229    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:38.580235    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:38.580239    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:38.581845    4347 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:33:38.581859    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:38.581865    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:38.581870    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:38.581875    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:38.581880    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:38.581884    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:38 GMT
	I0810 22:33:38.582065    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","resourceVersion":"406","creationTimestamp":"2021-08-10T22:33:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223223-30291","minikube.k8s.io/updated_at":"2021_08_10T22_33_20_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22 [truncated 6553 chars]
	I0810 22:33:38.582374    4347 pod_ready.go:92] pod "kube-controller-manager-multinode-20210810223223-30291" in "kube-system" namespace has status "Ready":"True"
	I0810 22:33:38.582395    4347 pod_ready.go:81] duration metric: took 4.663344ms waiting for pod "kube-controller-manager-multinode-20210810223223-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:33:38.582408    4347 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lmhw9" in "kube-system" namespace to be "Ready" ...
	I0810 22:33:38.582473    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lmhw9
	I0810 22:33:38.582484    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:38.582490    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:38.582498    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:38.584255    4347 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:33:38.584271    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:38.584275    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:38.584279    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:38.584282    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:38.584284    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:38.584287    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:38 GMT
	I0810 22:33:38.584565    4347 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lmhw9","generateName":"kube-proxy-","namespace":"kube-system","uid":"2a10306d-93c9-4aac-b47a-8bd1d406882c","resourceVersion":"470","creationTimestamp":"2021-08-10T22:33:33Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"3eb0224f-214a-4d5e-ba63-b7b722448d21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:33:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3eb0224f-214a-4d5e-ba63-b7b722448d21\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller
":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5758 chars]
	I0810 22:33:38.584865    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291
	I0810 22:33:38.584880    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:38.584887    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:38.584893    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:38.587199    4347 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:33:38.587215    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:38.587221    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:38.587226    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:38.587230    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:38.587234    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:38.587239    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:38 GMT
	I0810 22:33:38.587537    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","resourceVersion":"406","creationTimestamp":"2021-08-10T22:33:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223223-30291","minikube.k8s.io/updated_at":"2021_08_10T22_33_20_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22 [truncated 6553 chars]
	I0810 22:33:38.587818    4347 pod_ready.go:92] pod "kube-proxy-lmhw9" in "kube-system" namespace has status "Ready":"True"
	I0810 22:33:38.587832    4347 pod_ready.go:81] duration metric: took 5.405358ms waiting for pod "kube-proxy-lmhw9" in "kube-system" namespace to be "Ready" ...
	I0810 22:33:38.587843    4347 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-20210810223223-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:33:38.587898    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210810223223-30291
	I0810 22:33:38.587908    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:38.587913    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:38.587917    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:38.589724    4347 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:33:38.589763    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:38.589769    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:38 GMT
	I0810 22:33:38.589774    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:38.589779    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:38.589783    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:38.589788    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:38.589907    4347 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20210810223223-30291","namespace":"kube-system","uid":"5a7e6aa0-3e54-4877-a2b0-79df1e84d9f7","resourceVersion":"295","creationTimestamp":"2021-08-10T22:33:25Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"18b06801ebf2048d768b73e098da8a40","kubernetes.io/config.mirror":"18b06801ebf2048d768b73e098da8a40","kubernetes.io/config.seen":"2021-08-10T22:33:24.968063579Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:33:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:ku
bernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labe [truncated 4540 chars]
	I0810 22:33:38.590230    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291
	I0810 22:33:38.590250    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:38.590256    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:38.590262    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:38.596540    4347 round_trippers.go:457] Response Status: 200 OK in 6 milliseconds
	I0810 22:33:38.596556    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:38.596562    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:38 GMT
	I0810 22:33:38.596567    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:38.596571    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:38.596575    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:38.596579    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:38.596674    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","resourceVersion":"406","creationTimestamp":"2021-08-10T22:33:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223223-30291","minikube.k8s.io/updated_at":"2021_08_10T22_33_20_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22 [truncated 6553 chars]
	I0810 22:33:38.596896    4347 pod_ready.go:92] pod "kube-scheduler-multinode-20210810223223-30291" in "kube-system" namespace has status "Ready":"True"
	I0810 22:33:38.596908    4347 pod_ready.go:81] duration metric: took 9.05652ms waiting for pod "kube-scheduler-multinode-20210810223223-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:33:38.596918    4347 pod_ready.go:38] duration metric: took 4.61496147s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0810 22:33:38.596944    4347 api_server.go:50] waiting for apiserver process to appear ...
	I0810 22:33:38.596999    4347 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0810 22:33:38.613458    4347 command_runner.go:124] > 2634
	I0810 22:33:38.614230    4347 api_server.go:70] duration metric: took 4.692982146s to wait for apiserver process to appear ...
	I0810 22:33:38.614252    4347 api_server.go:86] waiting for apiserver healthz status ...
	I0810 22:33:38.614264    4347 api_server.go:239] Checking apiserver healthz at https://192.168.50.32:8443/healthz ...
	I0810 22:33:38.620248    4347 api_server.go:265] https://192.168.50.32:8443/healthz returned 200:
	ok
	I0810 22:33:38.620382    4347 round_trippers.go:432] GET https://192.168.50.32:8443/version?timeout=32s
	I0810 22:33:38.620394    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:38.620401    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:38.620406    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:38.621322    4347 round_trippers.go:457] Response Status: 200 OK in 0 milliseconds
	I0810 22:33:38.621338    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:38.621344    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:38 GMT
	I0810 22:33:38.621348    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:38.621353    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:38.621357    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:38.621361    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:38.621366    4347 round_trippers.go:463]     Content-Length: 263
	I0810 22:33:38.621486    4347 request.go:1123] Response Body: {
	  "major": "1",
	  "minor": "21",
	  "gitVersion": "v1.21.3",
	  "gitCommit": "ca643a4d1f7bfe34773c74f79527be4afd95bf39",
	  "gitTreeState": "clean",
	  "buildDate": "2021-07-15T20:59:07Z",
	  "goVersion": "go1.16.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0810 22:33:38.621593    4347 api_server.go:139] control plane version: v1.21.3
	I0810 22:33:38.621612    4347 api_server.go:129] duration metric: took 7.353449ms to wait for apiserver health ...
	I0810 22:33:38.621621    4347 system_pods.go:43] waiting for kube-system pods to appear ...
	I0810 22:33:38.765161    4347 request.go:600] Waited for 143.458968ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods
	I0810 22:33:38.765219    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods
	I0810 22:33:38.765225    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:38.765230    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:38.765235    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:38.773379    4347 round_trippers.go:457] Response Status: 200 OK in 8 milliseconds
	I0810 22:33:38.773421    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:38.773428    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:38 GMT
	I0810 22:33:38.773433    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:38.773436    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:38.773439    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:38.773442    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:38.776769    4347 request.go:1123] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"492"},"items":[{"metadata":{"name":"coredns-558bd4d5db-v7x6p","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"0c4eb44b-9d97-4934-aa16-8b8625bf04cf","resourceVersion":"490","creationTimestamp":"2021-08-10T22:33:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"28236a2d-7d69-4771-a778-5fae1cd7d05f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:33:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"28236a2d-7d69-4771-a778-5fae1cd7d05f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller"
:{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k: [truncated 53135 chars]
	I0810 22:33:38.777944    4347 system_pods.go:59] 8 kube-system pods found
	I0810 22:33:38.777979    4347 system_pods.go:61] "coredns-558bd4d5db-v7x6p" [0c4eb44b-9d97-4934-aa16-8b8625bf04cf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0810 22:33:38.777995    4347 system_pods.go:61] "etcd-multinode-20210810223223-30291" [8498143e-4386-44bc-9541-3193bd504c1d] Running
	I0810 22:33:38.778003    4347 system_pods.go:61] "kindnet-2bvdc" [c26b9021-1d86-475c-ac98-6f7e7e07c434] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0810 22:33:38.778010    4347 system_pods.go:61] "kube-apiserver-multinode-20210810223223-30291" [1c83d52d-8a08-42be-9c8a-6420a1bdb75c] Running
	I0810 22:33:38.778014    4347 system_pods.go:61] "kube-controller-manager-multinode-20210810223223-30291" [9305e895-2f70-44a4-8319-6f50b7e7a0ce] Running
	I0810 22:33:38.778018    4347 system_pods.go:61] "kube-proxy-lmhw9" [2a10306d-93c9-4aac-b47a-8bd1d406882c] Running
	I0810 22:33:38.778022    4347 system_pods.go:61] "kube-scheduler-multinode-20210810223223-30291" [5a7e6aa0-3e54-4877-a2b0-79df1e84d9f7] Running
	I0810 22:33:38.778029    4347 system_pods.go:61] "storage-provisioner" [af946d1d-fa19-47fa-8c83-fd1d06a0e788] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0810 22:33:38.778034    4347 system_pods.go:74] duration metric: took 156.4074ms to wait for pod list to return data ...
	I0810 22:33:38.778043    4347 default_sa.go:34] waiting for default service account to be created ...
	I0810 22:33:38.965387    4347 request.go:600] Waited for 187.27479ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.50.32:8443/api/v1/namespaces/default/serviceaccounts
	I0810 22:33:38.965461    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/namespaces/default/serviceaccounts
	I0810 22:33:38.965467    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:38.965472    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:38.965476    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:38.968510    4347 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:33:38.968533    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:38.968546    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:38.968550    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:38.968553    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:38.968557    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:38.968560    4347 round_trippers.go:463]     Content-Length: 304
	I0810 22:33:38.968566    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:38 GMT
	I0810 22:33:38.968584    4347 request.go:1123] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"492"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"9c37cf8a-c605-4344-9101-164c34e1b236","resourceVersion":"394","creationTimestamp":"2021-08-10T22:33:33Z"},"secrets":[{"name":"default-token-pfsbc"}]}]}
	I0810 22:33:38.969287    4347 default_sa.go:45] found service account: "default"
	I0810 22:33:38.969308    4347 default_sa.go:55] duration metric: took 191.259057ms for default service account to be created ...
	I0810 22:33:38.969315    4347 system_pods.go:116] waiting for k8s-apps to be running ...
	I0810 22:33:39.165137    4347 request.go:600] Waited for 195.745514ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods
	I0810 22:33:39.165213    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods
	I0810 22:33:39.165219    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:39.165224    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:39.165228    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:39.168975    4347 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:33:39.168994    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:39.169000    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:39 GMT
	I0810 22:33:39.169004    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:39.169007    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:39.169010    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:39.169013    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:39.170845    4347 request.go:1123] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"497"},"items":[{"metadata":{"name":"coredns-558bd4d5db-v7x6p","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"0c4eb44b-9d97-4934-aa16-8b8625bf04cf","resourceVersion":"493","creationTimestamp":"2021-08-10T22:33:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"28236a2d-7d69-4771-a778-5fae1cd7d05f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:33:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"28236a2d-7d69-4771-a778-5fae1cd7d05f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller"
:{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k: [truncated 52906 chars]
	I0810 22:33:39.172108    4347 system_pods.go:86] 8 kube-system pods found
	I0810 22:33:39.172151    4347 system_pods.go:89] "coredns-558bd4d5db-v7x6p" [0c4eb44b-9d97-4934-aa16-8b8625bf04cf] Running
	I0810 22:33:39.172160    4347 system_pods.go:89] "etcd-multinode-20210810223223-30291" [8498143e-4386-44bc-9541-3193bd504c1d] Running
	I0810 22:33:39.172168    4347 system_pods.go:89] "kindnet-2bvdc" [c26b9021-1d86-475c-ac98-6f7e7e07c434] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0810 22:33:39.172180    4347 system_pods.go:89] "kube-apiserver-multinode-20210810223223-30291" [1c83d52d-8a08-42be-9c8a-6420a1bdb75c] Running
	I0810 22:33:39.172185    4347 system_pods.go:89] "kube-controller-manager-multinode-20210810223223-30291" [9305e895-2f70-44a4-8319-6f50b7e7a0ce] Running
	I0810 22:33:39.172188    4347 system_pods.go:89] "kube-proxy-lmhw9" [2a10306d-93c9-4aac-b47a-8bd1d406882c] Running
	I0810 22:33:39.172192    4347 system_pods.go:89] "kube-scheduler-multinode-20210810223223-30291" [5a7e6aa0-3e54-4877-a2b0-79df1e84d9f7] Running
	I0810 22:33:39.172201    4347 system_pods.go:89] "storage-provisioner" [af946d1d-fa19-47fa-8c83-fd1d06a0e788] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0810 22:33:39.172227    4347 system_pods.go:126] duration metric: took 202.907364ms to wait for k8s-apps to be running ...
	I0810 22:33:39.172234    4347 system_svc.go:44] waiting for kubelet service to be running ....
	I0810 22:33:39.172279    4347 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0810 22:33:39.184422    4347 system_svc.go:56] duration metric: took 12.179239ms WaitForService to wait for kubelet.
	I0810 22:33:39.184441    4347 kubeadm.go:547] duration metric: took 5.263197859s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0810 22:33:39.184465    4347 node_conditions.go:102] verifying NodePressure condition ...
	I0810 22:33:39.365877    4347 request.go:600] Waited for 181.331919ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.50.32:8443/api/v1/nodes
	I0810 22:33:39.365947    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes
	I0810 22:33:39.365955    4347 round_trippers.go:438] Request Headers:
	I0810 22:33:39.365962    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:33:39.365977    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:33:39.369263    4347 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:33:39.369285    4347 round_trippers.go:460] Response Headers:
	I0810 22:33:39.369291    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:33:39.369294    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:33:39.369297    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:33:39.369300    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:33:39.369303    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:33:39 GMT
	I0810 22:33:39.369780    4347 request.go:1123] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"497"},"items":[{"metadata":{"name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","resourceVersion":"406","creationTimestamp":"2021-08-10T22:33:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223223-30291","minikube.k8s.io/updated_at":"2021_08_10T22_33_20_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed
-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operatio [truncated 6606 chars]
	I0810 22:33:39.370809    4347 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0810 22:33:39.370836    4347 node_conditions.go:123] node cpu capacity is 2
	I0810 22:33:39.370896    4347 node_conditions.go:105] duration metric: took 186.425566ms to run NodePressure ...
	I0810 22:33:39.370910    4347 start.go:231] waiting for startup goroutines ...
	I0810 22:33:39.373183    4347 out.go:177] 
	I0810 22:33:39.373458    4347 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/config.json ...
	I0810 22:33:39.375290    4347 out.go:177] * Starting node multinode-20210810223223-30291-m02 in cluster multinode-20210810223223-30291
	I0810 22:33:39.375311    4347 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0810 22:33:39.375324    4347 cache.go:56] Caching tarball of preloaded images
	I0810 22:33:39.375468    4347 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0810 22:33:39.375488    4347 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on crio
	I0810 22:33:39.375558    4347 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/config.json ...
	I0810 22:33:39.375692    4347 cache.go:205] Successfully downloaded all kic artifacts
	I0810 22:33:39.375716    4347 start.go:313] acquiring machines lock for multinode-20210810223223-30291-m02: {Name:mk9647f7c84b24381af0d3e731fd883065efc3b8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0810 22:33:39.375768    4347 start.go:317] acquired machines lock for "multinode-20210810223223-30291-m02" in 38.125µs
	I0810 22:33:39.375787    4347 start.go:89] Provisioning new machine with config: &{Name:multinode-20210810223223-30291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12122/minikube-v1.22.0-1628238775-12122.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 C
lusterName:multinode-20210810223223-30291 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.32 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0} &{Name:m02 IP: Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Wo
rker:true}
	I0810 22:33:39.375843    4347 start.go:126] createHost starting for "m02" (driver="kvm2")
	I0810 22:33:39.377535    4347 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0810 22:33:39.377656    4347 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:33:39.377692    4347 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:33:39.389071    4347 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38337
	I0810 22:33:39.389528    4347 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:33:39.390013    4347 main.go:130] libmachine: Using API Version  1
	I0810 22:33:39.390036    4347 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:33:39.390344    4347 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:33:39.390529    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetMachineName
	I0810 22:33:39.390661    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .DriverName
	I0810 22:33:39.390822    4347 start.go:160] libmachine.API.Create for "multinode-20210810223223-30291" (driver="kvm2")
	I0810 22:33:39.390850    4347 client.go:168] LocalClient.Create starting
	I0810 22:33:39.390876    4347 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem
	I0810 22:33:39.390902    4347 main.go:130] libmachine: Decoding PEM data...
	I0810 22:33:39.390921    4347 main.go:130] libmachine: Parsing certificate...
	I0810 22:33:39.391039    4347 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem
	I0810 22:33:39.391056    4347 main.go:130] libmachine: Decoding PEM data...
	I0810 22:33:39.391067    4347 main.go:130] libmachine: Parsing certificate...
	I0810 22:33:39.391123    4347 main.go:130] libmachine: Running pre-create checks...
	I0810 22:33:39.391136    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .PreCreateCheck
	I0810 22:33:39.391310    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetConfigRaw
	I0810 22:33:39.391764    4347 main.go:130] libmachine: Creating machine...
	I0810 22:33:39.391779    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .Create
	I0810 22:33:39.391915    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Creating KVM machine...
	I0810 22:33:39.394529    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | found existing default KVM network
	I0810 22:33:39.394714    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | found existing private KVM network mk-multinode-20210810223223-30291
	I0810 22:33:39.394802    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Setting up store path in /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291-m02 ...
	I0810 22:33:39.394828    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Building disk image from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/iso/minikube-v1.22.0-1628238775-12122.iso
	I0810 22:33:39.394910    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | I0810 22:33:39.394787    4625 common.go:108] Making disk image using store path: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	I0810 22:33:39.394996    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Downloading /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/iso/minikube-v1.22.0-1628238775-12122.iso...
	I0810 22:33:39.591851    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | I0810 22:33:39.591730    4625 common.go:115] Creating ssh key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291-m02/id_rsa...
	I0810 22:33:39.872490    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | I0810 22:33:39.872342    4625 common.go:121] Creating raw disk image: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291-m02/multinode-20210810223223-30291-m02.rawdisk...
	I0810 22:33:39.872536    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | Writing magic tar header
	I0810 22:33:39.872605    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | Writing SSH key tar header
	I0810 22:33:39.872662    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | I0810 22:33:39.872483    4625 common.go:135] Fixing permissions on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291-m02 ...
	I0810 22:33:39.872729    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291-m02
	I0810 22:33:39.872765    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291-m02 (perms=drwx------)
	I0810 22:33:39.872792    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines (perms=drwxr-xr-x)
	I0810 22:33:39.872819    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines
	I0810 22:33:39.872840    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube (perms=drwxr-xr-x)
	I0810 22:33:39.872862    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0 (perms=drwxr-xr-x)
	I0810 22:33:39.872879    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxr-xr-x)
	I0810 22:33:39.872894    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0810 22:33:39.872910    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	I0810 22:33:39.872930    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0
	I0810 22:33:39.872956    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0810 22:33:39.872969    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Creating domain...
	I0810 22:33:39.872990    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | Checking permissions on dir: /home/jenkins
	I0810 22:33:39.873011    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | Checking permissions on dir: /home
	I0810 22:33:39.873029    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | Skipping /home - not owner
	I0810 22:33:39.897657    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:fb:77:5c in network default
	I0810 22:33:39.898150    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Ensuring networks are active...
	I0810 22:33:39.898180    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:39.900225    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Ensuring network default is active
	I0810 22:33:39.900536    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Ensuring network mk-multinode-20210810223223-30291 is active
	I0810 22:33:39.900871    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Getting domain xml...
	I0810 22:33:39.902635    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Creating domain...
	I0810 22:33:40.317605    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Waiting to get IP...
	I0810 22:33:40.318388    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:40.318898    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | unable to find current IP address of domain multinode-20210810223223-30291-m02 in network mk-multinode-20210810223223-30291
	I0810 22:33:40.318927    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | I0810 22:33:40.318855    4625 retry.go:31] will retry after 263.082536ms: waiting for machine to come up
	I0810 22:33:40.583240    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:40.583817    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | unable to find current IP address of domain multinode-20210810223223-30291-m02 in network mk-multinode-20210810223223-30291
	I0810 22:33:40.583844    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | I0810 22:33:40.583766    4625 retry.go:31] will retry after 381.329545ms: waiting for machine to come up
	I0810 22:33:40.966355    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:40.966758    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | unable to find current IP address of domain multinode-20210810223223-30291-m02 in network mk-multinode-20210810223223-30291
	I0810 22:33:40.966780    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | I0810 22:33:40.966730    4625 retry.go:31] will retry after 422.765636ms: waiting for machine to come up
	I0810 22:33:41.391252    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:41.391751    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | unable to find current IP address of domain multinode-20210810223223-30291-m02 in network mk-multinode-20210810223223-30291
	I0810 22:33:41.391784    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | I0810 22:33:41.391698    4625 retry.go:31] will retry after 473.074753ms: waiting for machine to come up
	I0810 22:33:41.866200    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:41.866699    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | unable to find current IP address of domain multinode-20210810223223-30291-m02 in network mk-multinode-20210810223223-30291
	I0810 22:33:41.866723    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | I0810 22:33:41.866659    4625 retry.go:31] will retry after 587.352751ms: waiting for machine to come up
	I0810 22:33:42.455304    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:42.455729    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | unable to find current IP address of domain multinode-20210810223223-30291-m02 in network mk-multinode-20210810223223-30291
	I0810 22:33:42.455757    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | I0810 22:33:42.455675    4625 retry.go:31] will retry after 834.206799ms: waiting for machine to come up
	I0810 22:33:43.291548    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:43.292039    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | unable to find current IP address of domain multinode-20210810223223-30291-m02 in network mk-multinode-20210810223223-30291
	I0810 22:33:43.292066    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | I0810 22:33:43.291996    4625 retry.go:31] will retry after 746.553905ms: waiting for machine to come up
	I0810 22:33:44.039818    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:44.040291    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | unable to find current IP address of domain multinode-20210810223223-30291-m02 in network mk-multinode-20210810223223-30291
	I0810 22:33:44.040323    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | I0810 22:33:44.040247    4625 retry.go:31] will retry after 987.362415ms: waiting for machine to come up
	I0810 22:33:45.028879    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:45.029382    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | unable to find current IP address of domain multinode-20210810223223-30291-m02 in network mk-multinode-20210810223223-30291
	I0810 22:33:45.029407    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | I0810 22:33:45.029327    4625 retry.go:31] will retry after 1.189835008s: waiting for machine to come up
	I0810 22:33:46.220158    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:46.220603    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | unable to find current IP address of domain multinode-20210810223223-30291-m02 in network mk-multinode-20210810223223-30291
	I0810 22:33:46.220627    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | I0810 22:33:46.220559    4625 retry.go:31] will retry after 1.677229867s: waiting for machine to come up
	I0810 22:33:47.900417    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:47.900951    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | unable to find current IP address of domain multinode-20210810223223-30291-m02 in network mk-multinode-20210810223223-30291
	I0810 22:33:47.900989    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | I0810 22:33:47.900884    4625 retry.go:31] will retry after 2.346016261s: waiting for machine to come up
	I0810 22:33:50.247928    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:50.248472    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | unable to find current IP address of domain multinode-20210810223223-30291-m02 in network mk-multinode-20210810223223-30291
	I0810 22:33:50.248503    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | I0810 22:33:50.248417    4625 retry.go:31] will retry after 3.36678925s: waiting for machine to come up
	I0810 22:33:53.618810    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:53.619341    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Found IP for machine: 192.168.50.251
	I0810 22:33:53.619376    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has current primary IP address 192.168.50.251 and MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:53.619392    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Reserving static IP address...
	I0810 22:33:53.619739    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | unable to find host DHCP lease matching {name: "multinode-20210810223223-30291-m02", mac: "52:54:00:5f:3c:a9", ip: "192.168.50.251"} in network mk-multinode-20210810223223-30291
	I0810 22:33:53.667014    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | Getting to WaitForSSH function...
	I0810 22:33:53.667075    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Reserved static IP address: 192.168.50.251
	I0810 22:33:53.667092    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Waiting for SSH to be available...
	I0810 22:33:53.671847    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:53.672357    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:3c:a9", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:33:53 +0000 UTC Type:0 Mac:52:54:00:5f:3c:a9 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5f:3c:a9}
	I0810 22:33:53.672384    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined IP address 192.168.50.251 and MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:53.672460    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | Using SSH client type: external
	I0810 22:33:53.672489    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291-m02/id_rsa (-rw-------)
	I0810 22:33:53.672572    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.251 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0810 22:33:53.672598    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | About to run SSH command:
	I0810 22:33:53.672613    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | exit 0
	I0810 22:33:53.811241    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | SSH cmd err, output: <nil>: 
	I0810 22:33:53.811682    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) KVM machine creation complete!
	I0810 22:33:53.811775    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetConfigRaw
	I0810 22:33:53.812368    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .DriverName
	I0810 22:33:53.812553    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .DriverName
	I0810 22:33:53.812713    4347 main.go:130] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0810 22:33:53.812732    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetState
	I0810 22:33:53.815427    4347 main.go:130] libmachine: Detecting operating system of created instance...
	I0810 22:33:53.815441    4347 main.go:130] libmachine: Waiting for SSH to be available...
	I0810 22:33:53.815449    4347 main.go:130] libmachine: Getting to WaitForSSH function...
	I0810 22:33:53.815455    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHHostname
	I0810 22:33:53.819909    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:53.820253    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:3c:a9", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:33:53 +0000 UTC Type:0 Mac:52:54:00:5f:3c:a9 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:multinode-20210810223223-30291-m02 Clientid:01:52:54:00:5f:3c:a9}
	I0810 22:33:53.820275    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined IP address 192.168.50.251 and MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:53.820397    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHPort
	I0810 22:33:53.820569    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHKeyPath
	I0810 22:33:53.820705    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHKeyPath
	I0810 22:33:53.820803    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHUsername
	I0810 22:33:53.820982    4347 main.go:130] libmachine: Using SSH client type: native
	I0810 22:33:53.821136    4347 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0810 22:33:53.821150    4347 main.go:130] libmachine: About to run SSH command:
	exit 0
	I0810 22:33:53.950995    4347 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0810 22:33:53.951017    4347 main.go:130] libmachine: Detecting the provisioner...
	I0810 22:33:53.951026    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHHostname
	I0810 22:33:53.956198    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:53.956520    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:3c:a9", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:33:53 +0000 UTC Type:0 Mac:52:54:00:5f:3c:a9 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:multinode-20210810223223-30291-m02 Clientid:01:52:54:00:5f:3c:a9}
	I0810 22:33:53.956552    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined IP address 192.168.50.251 and MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:53.956672    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHPort
	I0810 22:33:53.956873    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHKeyPath
	I0810 22:33:53.957045    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHKeyPath
	I0810 22:33:53.957186    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHUsername
	I0810 22:33:53.957299    4347 main.go:130] libmachine: Using SSH client type: native
	I0810 22:33:53.957440    4347 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0810 22:33:53.957451    4347 main.go:130] libmachine: About to run SSH command:
	cat /etc/os-release
	I0810 22:33:54.088979    4347 main.go:130] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2020.02.12
	ID=buildroot
	VERSION_ID=2020.02.12
	PRETTY_NAME="Buildroot 2020.02.12"
	
	I0810 22:33:54.089056    4347 main.go:130] libmachine: found compatible host: buildroot
	I0810 22:33:54.089066    4347 main.go:130] libmachine: Provisioning with buildroot...
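Provisioner detection above works by running `cat /etc/os-release` over SSH and matching the `ID=` field against known provisioners. A sketch of that parse under an assumed helper name `detectProvisioner` (not the real libmachine API):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// detectProvisioner extracts the ID field from /etc/os-release-style
// text, as libmachine does with the `cat /etc/os-release` output in
// the log. detectProvisioner is an illustrative name.
func detectProvisioner(osRelease string) string {
	sc := bufio.NewScanner(strings.NewReader(osRelease))
	for sc.Scan() {
		if v, ok := strings.CutPrefix(sc.Text(), "ID="); ok {
			// os-release values may be quoted; strip the quotes.
			return strings.Trim(v, `"`)
		}
	}
	return ""
}

func main() {
	// Sample copied from the SSH output above.
	sample := "NAME=Buildroot\nVERSION=2020.02.12\nID=buildroot\n" +
		"VERSION_ID=2020.02.12\nPRETTY_NAME=\"Buildroot 2020.02.12\"\n"
	fmt.Println("found compatible host:", detectProvisioner(sample))
}
```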
	I0810 22:33:54.089075    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetMachineName
	I0810 22:33:54.089329    4347 buildroot.go:166] provisioning hostname "multinode-20210810223223-30291-m02"
	I0810 22:33:54.089358    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetMachineName
	I0810 22:33:54.089535    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHHostname
	I0810 22:33:54.094741    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:54.095121    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:3c:a9", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:33:53 +0000 UTC Type:0 Mac:52:54:00:5f:3c:a9 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:multinode-20210810223223-30291-m02 Clientid:01:52:54:00:5f:3c:a9}
	I0810 22:33:54.095161    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined IP address 192.168.50.251 and MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:54.095272    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHPort
	I0810 22:33:54.095467    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHKeyPath
	I0810 22:33:54.095616    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHKeyPath
	I0810 22:33:54.095736    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHUsername
	I0810 22:33:54.095870    4347 main.go:130] libmachine: Using SSH client type: native
	I0810 22:33:54.096067    4347 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0810 22:33:54.096087    4347 main.go:130] libmachine: About to run SSH command:
	sudo hostname multinode-20210810223223-30291-m02 && echo "multinode-20210810223223-30291-m02" | sudo tee /etc/hostname
	I0810 22:33:54.237962    4347 main.go:130] libmachine: SSH cmd err, output: <nil>: multinode-20210810223223-30291-m02
	
	I0810 22:33:54.237992    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHHostname
	I0810 22:33:54.243272    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:54.243647    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:3c:a9", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:33:53 +0000 UTC Type:0 Mac:52:54:00:5f:3c:a9 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:multinode-20210810223223-30291-m02 Clientid:01:52:54:00:5f:3c:a9}
	I0810 22:33:54.243672    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined IP address 192.168.50.251 and MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:54.243836    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHPort
	I0810 22:33:54.244047    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHKeyPath
	I0810 22:33:54.244207    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHKeyPath
	I0810 22:33:54.244333    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHUsername
	I0810 22:33:54.244485    4347 main.go:130] libmachine: Using SSH client type: native
	I0810 22:33:54.244661    4347 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0810 22:33:54.244686    4347 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20210810223223-30291-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20210810223223-30291-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20210810223223-30291-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0810 22:33:54.382698    4347 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0810 22:33:54.382734    4347 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube}
	I0810 22:33:54.382752    4347 buildroot.go:174] setting up certificates
	I0810 22:33:54.382761    4347 provision.go:83] configureAuth start
	I0810 22:33:54.382770    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetMachineName
	I0810 22:33:54.383080    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetIP
	I0810 22:33:54.388135    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:54.388480    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:3c:a9", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:33:53 +0000 UTC Type:0 Mac:52:54:00:5f:3c:a9 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:multinode-20210810223223-30291-m02 Clientid:01:52:54:00:5f:3c:a9}
	I0810 22:33:54.388521    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined IP address 192.168.50.251 and MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:54.388699    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHHostname
	I0810 22:33:54.392730    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:54.393072    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:3c:a9", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:33:53 +0000 UTC Type:0 Mac:52:54:00:5f:3c:a9 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:multinode-20210810223223-30291-m02 Clientid:01:52:54:00:5f:3c:a9}
	I0810 22:33:54.393101    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined IP address 192.168.50.251 and MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:54.393214    4347 provision.go:137] copyHostCerts
	I0810 22:33:54.393261    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem
	I0810 22:33:54.393292    4347 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem, removing ...
	I0810 22:33:54.393302    4347 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem
	I0810 22:33:54.393365    4347 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem (1082 bytes)
	I0810 22:33:54.393456    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem
	I0810 22:33:54.393474    4347 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem, removing ...
	I0810 22:33:54.393480    4347 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem
	I0810 22:33:54.393500    4347 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem (1123 bytes)
	I0810 22:33:54.393551    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem
	I0810 22:33:54.393567    4347 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem, removing ...
	I0810 22:33:54.393579    4347 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem
	I0810 22:33:54.393598    4347 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem (1679 bytes)
	I0810 22:33:54.393650    4347 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem org=jenkins.multinode-20210810223223-30291-m02 san=[192.168.50.251 192.168.50.251 localhost 127.0.0.1 minikube multinode-20210810223223-30291-m02]
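The provision step above signs a server certificate against the minikube CA, listing the node IP, localhost, and the machine name as subject alternative names. A minimal standalone sketch of the same SAN shape with the `openssl` CLI; it self-signs for brevity (the real provisioner is CA-signed), and every path and name below is a placeholder, not one of minikube's actual files:

```shell
# Self-signed server cert with the same SAN layout as the log line above.
# /tmp paths and the org name are illustrative placeholders.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo-server-key.pem -out /tmp/demo-server.pem \
  -subj "/O=jenkins.demo-m02" \
  -addext "subjectAltName=IP:192.168.50.251,IP:127.0.0.1,DNS:localhost,DNS:minikube"
# Inspect the SAN extension that ends up in the certificate
openssl x509 -in /tmp/demo-server.pem -noout -ext subjectAltName
```

`-addext` requires OpenSSL 1.1.1 or newer; older releases need the SAN spelled out in a config file instead.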
	I0810 22:33:54.552230    4347 provision.go:171] copyRemoteCerts
	I0810 22:33:54.552289    4347 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0810 22:33:54.552317    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHHostname
	I0810 22:33:54.558060    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:54.558430    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:3c:a9", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:33:53 +0000 UTC Type:0 Mac:52:54:00:5f:3c:a9 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:multinode-20210810223223-30291-m02 Clientid:01:52:54:00:5f:3c:a9}
	I0810 22:33:54.558464    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined IP address 192.168.50.251 and MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:54.558579    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHPort
	I0810 22:33:54.558782    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHKeyPath
	I0810 22:33:54.558948    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHUsername
	I0810 22:33:54.559117    4347 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291-m02/id_rsa Username:docker}
	I0810 22:33:54.650917    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0810 22:33:54.650988    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0810 22:33:54.667389    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0810 22:33:54.667439    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem --> /etc/docker/server.pem (1273 bytes)
	I0810 22:33:54.683372    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0810 22:33:54.683410    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0810 22:33:54.699647    4347 provision.go:86] duration metric: configureAuth took 316.874754ms
	I0810 22:33:54.699671    4347 buildroot.go:189] setting minikube options for container-runtime
	I0810 22:33:54.699921    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHHostname
	I0810 22:33:54.705184    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:54.705535    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:3c:a9", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:33:53 +0000 UTC Type:0 Mac:52:54:00:5f:3c:a9 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:multinode-20210810223223-30291-m02 Clientid:01:52:54:00:5f:3c:a9}
	I0810 22:33:54.705562    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined IP address 192.168.50.251 and MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:54.705701    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHPort
	I0810 22:33:54.705876    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHKeyPath
	I0810 22:33:54.706040    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHKeyPath
	I0810 22:33:54.706160    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHUsername
	I0810 22:33:54.706302    4347 main.go:130] libmachine: Using SSH client type: native
	I0810 22:33:54.706440    4347 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0810 22:33:54.706456    4347 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0810 22:33:55.299688    4347 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0810 22:33:55.299726    4347 main.go:130] libmachine: Checking connection to Docker...
	I0810 22:33:55.299740    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetURL
	I0810 22:33:55.302601    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | Using libvirt version 3000000
	I0810 22:33:55.307008    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:55.307330    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:3c:a9", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:33:53 +0000 UTC Type:0 Mac:52:54:00:5f:3c:a9 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:multinode-20210810223223-30291-m02 Clientid:01:52:54:00:5f:3c:a9}
	I0810 22:33:55.307361    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined IP address 192.168.50.251 and MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:55.307525    4347 main.go:130] libmachine: Docker is up and running!
	I0810 22:33:55.307546    4347 main.go:130] libmachine: Reticulating splines...
	I0810 22:33:55.307552    4347 client.go:171] LocalClient.Create took 15.916696067s
	I0810 22:33:55.307570    4347 start.go:168] duration metric: libmachine.API.Create for "multinode-20210810223223-30291" took 15.916747981s
	I0810 22:33:55.307583    4347 start.go:267] post-start starting for "multinode-20210810223223-30291-m02" (driver="kvm2")
	I0810 22:33:55.307593    4347 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0810 22:33:55.307616    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .DriverName
	I0810 22:33:55.307845    4347 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0810 22:33:55.307873    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHHostname
	I0810 22:33:55.312135    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:55.312458    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:3c:a9", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:33:53 +0000 UTC Type:0 Mac:52:54:00:5f:3c:a9 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:multinode-20210810223223-30291-m02 Clientid:01:52:54:00:5f:3c:a9}
	I0810 22:33:55.312485    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined IP address 192.168.50.251 and MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:55.312571    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHPort
	I0810 22:33:55.312745    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHKeyPath
	I0810 22:33:55.312906    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHUsername
	I0810 22:33:55.313019    4347 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291-m02/id_rsa Username:docker}
	I0810 22:33:55.407947    4347 ssh_runner.go:149] Run: cat /etc/os-release
	I0810 22:33:55.412349    4347 command_runner.go:124] > NAME=Buildroot
	I0810 22:33:55.412365    4347 command_runner.go:124] > VERSION=2020.02.12
	I0810 22:33:55.412369    4347 command_runner.go:124] > ID=buildroot
	I0810 22:33:55.412374    4347 command_runner.go:124] > VERSION_ID=2020.02.12
	I0810 22:33:55.412379    4347 command_runner.go:124] > PRETTY_NAME="Buildroot 2020.02.12"
	I0810 22:33:55.412564    4347 info.go:137] Remote host: Buildroot 2020.02.12
	I0810 22:33:55.412587    4347 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/addons for local assets ...
	I0810 22:33:55.412651    4347 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files for local assets ...
	I0810 22:33:55.412752    4347 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/302912.pem -> 302912.pem in /etc/ssl/certs
	I0810 22:33:55.412764    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/302912.pem -> /etc/ssl/certs/302912.pem
	I0810 22:33:55.412859    4347 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0810 22:33:55.419853    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/302912.pem --> /etc/ssl/certs/302912.pem (1708 bytes)
	I0810 22:33:55.438940    4347 start.go:270] post-start completed in 131.338951ms
	I0810 22:33:55.439002    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetConfigRaw
	I0810 22:33:55.439628    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetIP
	I0810 22:33:55.445097    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:55.445462    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:3c:a9", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:33:53 +0000 UTC Type:0 Mac:52:54:00:5f:3c:a9 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:multinode-20210810223223-30291-m02 Clientid:01:52:54:00:5f:3c:a9}
	I0810 22:33:55.445497    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined IP address 192.168.50.251 and MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:55.445725    4347 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/config.json ...
	I0810 22:33:55.445999    4347 start.go:129] duration metric: createHost completed in 16.070145517s
	I0810 22:33:55.446019    4347 start.go:80] releasing machines lock for "multinode-20210810223223-30291-m02", held for 16.070240884s
	I0810 22:33:55.446061    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .DriverName
	I0810 22:33:55.446337    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetIP
	I0810 22:33:55.450828    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:55.451102    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:3c:a9", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:33:53 +0000 UTC Type:0 Mac:52:54:00:5f:3c:a9 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:multinode-20210810223223-30291-m02 Clientid:01:52:54:00:5f:3c:a9}
	I0810 22:33:55.451135    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined IP address 192.168.50.251 and MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:55.453906    4347 out.go:177] * Found network options:
	I0810 22:33:55.455452    4347 out.go:177]   - NO_PROXY=192.168.50.32
	W0810 22:33:55.455496    4347 proxy.go:118] fail to check proxy env: Error ip not in block
	I0810 22:33:55.455548    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .DriverName
	I0810 22:33:55.455726    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .DriverName
	I0810 22:33:55.456208    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .DriverName
	W0810 22:33:55.456389    4347 proxy.go:118] fail to check proxy env: Error ip not in block
	I0810 22:33:55.456435    4347 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0810 22:33:55.456494    4347 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0810 22:33:55.456512    4347 ssh_runner.go:149] Run: sudo crictl images --output json
	I0810 22:33:55.456531    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHHostname
	I0810 22:33:55.456550    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHHostname
	I0810 22:33:55.461340    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:55.461671    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:3c:a9", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:33:53 +0000 UTC Type:0 Mac:52:54:00:5f:3c:a9 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:multinode-20210810223223-30291-m02 Clientid:01:52:54:00:5f:3c:a9}
	I0810 22:33:55.461718    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined IP address 192.168.50.251 and MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:55.461820    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHPort
	I0810 22:33:55.461979    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHKeyPath
	I0810 22:33:55.462124    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHUsername
	I0810 22:33:55.462275    4347 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291-m02/id_rsa Username:docker}
	I0810 22:33:55.462460    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:55.462811    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:3c:a9", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:33:53 +0000 UTC Type:0 Mac:52:54:00:5f:3c:a9 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:multinode-20210810223223-30291-m02 Clientid:01:52:54:00:5f:3c:a9}
	I0810 22:33:55.462840    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined IP address 192.168.50.251 and MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:33:55.462968    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHPort
	I0810 22:33:55.463133    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHKeyPath
	I0810 22:33:55.463276    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHUsername
	I0810 22:33:55.463416    4347 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291-m02/id_rsa Username:docker}
	I0810 22:33:55.566399    4347 command_runner.go:124] > <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
	I0810 22:33:55.566432    4347 command_runner.go:124] > <TITLE>302 Moved</TITLE></HEAD><BODY>
	I0810 22:33:55.566445    4347 command_runner.go:124] > <H1>302 Moved</H1>
	I0810 22:33:55.566452    4347 command_runner.go:124] > The document has moved
	I0810 22:33:55.566462    4347 command_runner.go:124] > <A HREF="https://cloud.google.com/container-registry/">here</A>.
	I0810 22:33:55.566468    4347 command_runner.go:124] > </BODY></HTML>
	I0810 22:33:55.576619    4347 command_runner.go:124] ! time="2021-08-10T22:33:55Z" level=warning msg="image connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	I0810 22:33:57.569137    4347 command_runner.go:124] ! time="2021-08-10T22:33:57Z" level=error msg="connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	I0810 22:33:59.562615    4347 command_runner.go:124] ! time="2021-08-10T22:33:59Z" level=error msg="connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	I0810 22:33:59.568157    4347 command_runner.go:124] > {
	I0810 22:33:59.568183    4347 command_runner.go:124] >   "images": [
	I0810 22:33:59.568189    4347 command_runner.go:124] >   ]
	I0810 22:33:59.568194    4347 command_runner.go:124] > }
	I0810 22:33:59.568215    4347 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.111689554s)
	I0810 22:33:59.568250    4347 crio.go:420] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.21.3". assuming images are not preloaded.
	I0810 22:33:59.568314    4347 ssh_runner.go:149] Run: which lz4
	I0810 22:33:59.572726    4347 command_runner.go:124] > /bin/lz4
	I0810 22:33:59.572967    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0810 22:33:59.573046    4347 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0810 22:33:59.577485    4347 command_runner.go:124] ! stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0810 22:33:59.577947    4347 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0810 22:33:59.577980    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (576184326 bytes)
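The `stat`-then-`scp` sequence above is a copy-if-absent probe: the `stat` failure (exit status 1) is expected and simply means the preload tarball must be transferred. The same pattern against a local scratch path (the `scp` is simulated, since this sketch has no remote host):

```shell
# Copy-if-absent: probe with stat, copy only on failure. Paths are placeholders.
target=/tmp/demo-preloaded.tar.lz4
rm -f "$target"
if stat -c "%s %y" "$target" >/dev/null 2>&1; then
  echo "already present, skipping copy"
else
  echo "absent, copying"
  touch "$target"   # the real code runs scp here
fi
stat -c "%s" "$target"
```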
	I0810 22:34:01.734690    4347 crio.go:362] Took 2.161670 seconds to copy over tarball
	I0810 22:34:01.734769    4347 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0810 22:34:06.980628    4347 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (5.245830184s)
	I0810 22:34:06.980669    4347 crio.go:369] Took 5.245944 seconds to extract the tarball
	I0810 22:34:06.980684    4347 ssh_runner.go:100] rm: /preloaded.tar.lz4
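`tar -I lz4` above hands decompression to an external filter program before unpacking. The flag works with any decompressor on PATH; a local round-trip using gzip instead of lz4, so the sketch runs without extra packages (all paths are illustrative):

```shell
# Round-trip a tarball through an explicit -I filter (GNU tar).
# gzip stands in for lz4 here; the mechanism is identical.
mkdir -p /tmp/tar-demo/src /tmp/tar-demo/dst
echo hello > /tmp/tar-demo/src/file.txt
tar -I gzip -C /tmp/tar-demo/src -cf /tmp/tar-demo/a.tar.gz file.txt
tar -I gzip -C /tmp/tar-demo/dst -xf /tmp/tar-demo/a.tar.gz
cat /tmp/tar-demo/dst/file.txt
```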
	I0810 22:34:07.020329    4347 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0810 22:34:07.032869    4347 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0810 22:34:07.043836    4347 docker.go:153] disabling docker service ...
	I0810 22:34:07.043893    4347 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0810 22:34:07.055118    4347 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0810 22:34:07.063888    4347 command_runner.go:124] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0810 22:34:07.064198    4347 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0810 22:34:07.186863    4347 command_runner.go:124] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0810 22:34:07.186945    4347 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0810 22:34:07.324959    4347 command_runner.go:124] ! Unit docker.service does not exist, proceeding anyway.
	I0810 22:34:07.324997    4347 command_runner.go:124] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0810 22:34:07.325069    4347 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0810 22:34:07.336641    4347 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0810 22:34:07.349280    4347 command_runner.go:124] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0810 22:34:07.349301    4347 command_runner.go:124] > image-endpoint: unix:///var/run/crio/crio.sock
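The `printf | sudo tee` pipeline above (whose `%s` verb the logger mangles into `%!s(MISSING)`) writes crictl's endpoint configuration; the two echoed lines confirm the file contents. The same write against a scratch path rather than /etc:

```shell
# Write the crictl config shown in the log to a placeholder directory.
mkdir -p /tmp/crictl-demo
printf '%s' 'runtime-endpoint: unix:///var/run/crio/crio.sock
image-endpoint: unix:///var/run/crio/crio.sock
' > /tmp/crictl-demo/crictl.yaml
cat /tmp/crictl-demo/crictl.yaml
```

With this file in place as /etc/crictl.yaml, `crictl` stops probing the deprecated default endpoints (the dockershim/containerd warnings visible earlier in this log).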
	I0810 22:34:07.349743    4347 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0810 22:34:07.357152    4347 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0810 22:34:07.357171    4347 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
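Both `sed -i` commands above rewrite whole `key = value` lines in /etc/crio/crio.conf; the second pattern (`^.*cni_default_network`) deliberately also matches a commented-out key so it gets uncommented. The same two edits applied to a scratch copy (the sample file contents are illustrative, not a real crio.conf):

```shell
# Apply the log's two in-place crio.conf edits to a sample file (GNU sed).
cat > /tmp/crio-demo.conf <<'EOF'
pause_image = "k8s.gcr.io/pause:3.2"
# cni_default_network = ""
EOF
sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /tmp/crio-demo.conf
sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /tmp/crio-demo.conf
cat /tmp/crio-demo.conf
```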
	I0810 22:34:07.364618    4347 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0810 22:34:07.370820    4347 command_runner.go:124] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0810 22:34:07.371014    4347 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0810 22:34:07.371064    4347 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0810 22:34:07.387991    4347 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
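The sequence above is a probe-then-load fallback: when the bridge-netfilter sysctl is missing, the `br_netfilter` module has not been loaded, so minikube loads it and then enables IP forwarding. A sketch of the same check; the privileged steps are left as comments since they need root:

```shell
# Probe for the sysctl; load the module only when the probe fails.
if sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
  echo "br_netfilter already active"
else
  echo "sysctl missing; would run: sudo modprobe br_netfilter"
fi
# IP forwarding is then enabled by writing the proc file (root required):
#   echo 1 > /proc/sys/net/ipv4/ip_forward
```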
	I0810 22:34:07.394493    4347 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0810 22:34:07.507775    4347 ssh_runner.go:149] Run: sudo systemctl start crio
	I0810 22:34:07.766969    4347 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0810 22:34:07.767057    4347 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0810 22:34:07.773077    4347 command_runner.go:124] >   File: /var/run/crio/crio.sock
	I0810 22:34:07.773107    4347 command_runner.go:124] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0810 22:34:07.773118    4347 command_runner.go:124] > Device: 14h/20d	Inode: 29756       Links: 1
	I0810 22:34:07.773129    4347 command_runner.go:124] > Access: (0755/srwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0810 22:34:07.773137    4347 command_runner.go:124] > Access: 2021-08-10 22:33:59.536352887 +0000
	I0810 22:34:07.773146    4347 command_runner.go:124] > Modify: 2021-08-10 22:33:55.233621889 +0000
	I0810 22:34:07.773156    4347 command_runner.go:124] > Change: 2021-08-10 22:33:55.233621889 +0000
	I0810 22:34:07.773162    4347 command_runner.go:124] >  Birth: -
	I0810 22:34:07.773277    4347 start.go:417] Will wait 60s for crictl version
	I0810 22:34:07.773351    4347 ssh_runner.go:149] Run: sudo crictl version
	I0810 22:34:07.814140    4347 command_runner.go:124] > Version:  0.1.0
	I0810 22:34:07.814169    4347 command_runner.go:124] > RuntimeName:  cri-o
	I0810 22:34:07.814177    4347 command_runner.go:124] > RuntimeVersion:  1.20.2
	I0810 22:34:07.814185    4347 command_runner.go:124] > RuntimeApiVersion:  v1alpha1
	I0810 22:34:07.814206    4347 start.go:426] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.2
	RuntimeApiVersion:  v1alpha1
	I0810 22:34:07.814280    4347 ssh_runner.go:149] Run: crio --version
	I0810 22:34:08.067036    4347 command_runner.go:124] > crio version 1.20.2
	I0810 22:34:08.067060    4347 command_runner.go:124] > Version:       1.20.2
	I0810 22:34:08.067068    4347 command_runner.go:124] > GitCommit:     d5a999ad0a35d895ded554e1e18c142075501a98
	I0810 22:34:08.067072    4347 command_runner.go:124] > GitTreeState:  clean
	I0810 22:34:08.067079    4347 command_runner.go:124] > BuildDate:     2021-08-06T09:19:16Z
	I0810 22:34:08.067090    4347 command_runner.go:124] > GoVersion:     go1.13.15
	I0810 22:34:08.067094    4347 command_runner.go:124] > Compiler:      gc
	I0810 22:34:08.067099    4347 command_runner.go:124] > Platform:      linux/amd64
	I0810 22:34:08.068506    4347 command_runner.go:124] ! time="2021-08-10T22:34:08Z" level=info msg="Starting CRI-O, version: 1.20.2, git: d5a999ad0a35d895ded554e1e18c142075501a98(clean)"
	I0810 22:34:08.068608    4347 ssh_runner.go:149] Run: crio --version
	I0810 22:34:08.350482    4347 command_runner.go:124] > crio version 1.20.2
	I0810 22:34:08.350507    4347 command_runner.go:124] > Version:       1.20.2
	I0810 22:34:08.350514    4347 command_runner.go:124] > GitCommit:     d5a999ad0a35d895ded554e1e18c142075501a98
	I0810 22:34:08.350519    4347 command_runner.go:124] > GitTreeState:  clean
	I0810 22:34:08.350525    4347 command_runner.go:124] > BuildDate:     2021-08-06T09:19:16Z
	I0810 22:34:08.350529    4347 command_runner.go:124] > GoVersion:     go1.13.15
	I0810 22:34:08.350533    4347 command_runner.go:124] > Compiler:      gc
	I0810 22:34:08.350538    4347 command_runner.go:124] > Platform:      linux/amd64
	I0810 22:34:08.351350    4347 command_runner.go:124] ! time="2021-08-10T22:34:08Z" level=info msg="Starting CRI-O, version: 1.20.2, git: d5a999ad0a35d895ded554e1e18c142075501a98(clean)"
	I0810 22:34:10.019648    4347 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.2 ...
	I0810 22:34:10.105794    4347 out.go:177]   - env NO_PROXY=192.168.50.32
	I0810 22:34:10.105875    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetIP
	I0810 22:34:10.112380    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:34:10.112803    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:3c:a9", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:33:53 +0000 UTC Type:0 Mac:52:54:00:5f:3c:a9 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:multinode-20210810223223-30291-m02 Clientid:01:52:54:00:5f:3c:a9}
	I0810 22:34:10.112844    4347 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined IP address 192.168.50.251 and MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:34:10.113087    4347 ssh_runner.go:149] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0810 22:34:10.118195    4347 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
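The two `ssh_runner` commands above implement an idempotent `/etc/hosts` update: first `grep` checks whether the mapping is already present, then a compound command drops any stale `host.minikube.internal` line, appends the current mapping, and copies the result back in one pass. A minimal standalone sketch of the same idiom (the file name and both IP addresses here are illustrative, not minikube's):

```shell
# Demo hosts file with a stale host.minikube.internal entry.
HOSTS=./hosts.demo
TAB="$(printf '\t')"
printf '127.0.0.1%slocalhost\n192.168.50.9%shost.minikube.internal\n' "$TAB" "$TAB" > "$HOSTS"

# Filter out the old mapping, append the fresh one, then move the
# rewritten file into place (the log does the copy via sudo cp).
{ grep -v "${TAB}host.minikube.internal\$" "$HOSTS"
  printf '192.168.50.1%shost.minikube.internal\n' "$TAB"
} > "$HOSTS.new"
mv "$HOSTS.new" "$HOSTS"
```

Because the old line is stripped before the new one is appended, re-running the snippet never accumulates duplicate entries.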
	I0810 22:34:10.131148    4347 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291 for IP: 192.168.50.251
	I0810 22:34:10.131224    4347 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key
	I0810 22:34:10.131247    4347 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key
	I0810 22:34:10.131266    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0810 22:34:10.131289    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0810 22:34:10.131302    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0810 22:34:10.131314    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0810 22:34:10.131385    4347 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/30291.pem (1338 bytes)
	W0810 22:34:10.131437    4347 certs.go:369] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/30291_empty.pem, impossibly tiny 0 bytes
	I0810 22:34:10.131458    4347 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem (1679 bytes)
	I0810 22:34:10.131509    4347 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem (1082 bytes)
	I0810 22:34:10.131548    4347 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem (1123 bytes)
	I0810 22:34:10.131581    4347 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem (1679 bytes)
	I0810 22:34:10.131690    4347 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/302912.pem (1708 bytes)
	I0810 22:34:10.131731    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/30291.pem -> /usr/share/ca-certificates/30291.pem
	I0810 22:34:10.131749    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/302912.pem -> /usr/share/ca-certificates/302912.pem
	I0810 22:34:10.131765    4347 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0810 22:34:10.132310    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0810 22:34:10.150840    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0810 22:34:10.167571    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0810 22:34:10.185093    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0810 22:34:10.201630    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/30291.pem --> /usr/share/ca-certificates/30291.pem (1338 bytes)
	I0810 22:34:10.218132    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/302912.pem --> /usr/share/ca-certificates/302912.pem (1708 bytes)
	I0810 22:34:10.236656    4347 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0810 22:34:10.254374    4347 ssh_runner.go:149] Run: openssl version
	I0810 22:34:10.260368    4347 command_runner.go:124] > OpenSSL 1.1.1k  25 Mar 2021
	I0810 22:34:10.260962    4347 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/30291.pem && ln -fs /usr/share/ca-certificates/30291.pem /etc/ssl/certs/30291.pem"
	I0810 22:34:10.269493    4347 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/30291.pem
	I0810 22:34:10.274088    4347 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Aug 10 22:27 /usr/share/ca-certificates/30291.pem
	I0810 22:34:10.274255    4347 certs.go:416] hashing: -rw-r--r-- 1 root root 1338 Aug 10 22:27 /usr/share/ca-certificates/30291.pem
	I0810 22:34:10.274292    4347 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/30291.pem
	I0810 22:34:10.280076    4347 command_runner.go:124] > 51391683
	I0810 22:34:10.280380    4347 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/30291.pem /etc/ssl/certs/51391683.0"
	I0810 22:34:10.288896    4347 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/302912.pem && ln -fs /usr/share/ca-certificates/302912.pem /etc/ssl/certs/302912.pem"
	I0810 22:34:10.297176    4347 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/302912.pem
	I0810 22:34:10.302118    4347 command_runner.go:124] > -rw-r--r-- 1 root root 1708 Aug 10 22:27 /usr/share/ca-certificates/302912.pem
	I0810 22:34:10.302149    4347 certs.go:416] hashing: -rw-r--r-- 1 root root 1708 Aug 10 22:27 /usr/share/ca-certificates/302912.pem
	I0810 22:34:10.302187    4347 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/302912.pem
	I0810 22:34:10.308450    4347 command_runner.go:124] > 3ec20f2e
	I0810 22:34:10.308502    4347 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/302912.pem /etc/ssl/certs/3ec20f2e.0"
	I0810 22:34:10.316573    4347 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0810 22:34:10.324873    4347 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0810 22:34:10.329713    4347 command_runner.go:124] > -rw-r--r-- 1 root root 1111 Aug 10 22:18 /usr/share/ca-certificates/minikubeCA.pem
	I0810 22:34:10.329750    4347 certs.go:416] hashing: -rw-r--r-- 1 root root 1111 Aug 10 22:18 /usr/share/ca-certificates/minikubeCA.pem
	I0810 22:34:10.329791    4347 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0810 22:34:10.335731    4347 command_runner.go:124] > b5213941
	I0810 22:34:10.335799    4347 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
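The `openssl x509 -hash` / `ln -fs` pairs above follow OpenSSL's trust-store convention: certificates in a CA directory are looked up through symlinks named `<subject-hash>.0`, which is why each installed PEM gets a link named after its hash (`51391683.0`, `3ec20f2e.0`, `b5213941.0`). A self-contained sketch of that linking step, using a throwaway self-signed certificate (the directory and CN are made up for illustration):

```shell
# Create a demo CA directory and a self-signed certificate in it.
CERTDIR=./certs.demo
mkdir -p "$CERTDIR"
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=demoCA' \
  -keyout "$CERTDIR/demo.key" -out "$CERTDIR/demo.pem" -days 1 2>/dev/null

# Compute the subject hash and create the <hash>.0 symlink that
# OpenSSL's directory lookup expects.
HASH="$(openssl x509 -hash -noout -in "$CERTDIR/demo.pem")"
ln -fs "demo.pem" "$CERTDIR/$HASH.0"
```

This is the same job the `c_rehash` utility performs for a whole directory; the log simply does it one certificate at a time.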
	I0810 22:34:10.345248    4347 ssh_runner.go:149] Run: crio config
	I0810 22:34:10.593377    4347 command_runner.go:124] ! time="2021-08-10T22:34:10Z" level=info msg="Starting CRI-O, version: 1.20.2, git: d5a999ad0a35d895ded554e1e18c142075501a98(clean)"
	I0810 22:34:10.594877    4347 command_runner.go:124] ! time="2021-08-10T22:34:10Z" level=warning msg="The 'registries' option in crio.conf(5) (referenced in \"/etc/crio/crio.conf\") has been deprecated and will be removed with CRI-O 1.21."
	I0810 22:34:10.594940    4347 command_runner.go:124] ! time="2021-08-10T22:34:10Z" level=warning msg="Please refer to containers-registries.conf(5) for configuring unqualified-search registries."
	I0810 22:34:10.597390    4347 command_runner.go:124] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0810 22:34:10.605019    4347 command_runner.go:124] > # The CRI-O configuration file specifies all of the available configuration
	I0810 22:34:10.605047    4347 command_runner.go:124] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0810 22:34:10.605062    4347 command_runner.go:124] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0810 22:34:10.605068    4347 command_runner.go:124] > #
	I0810 22:34:10.605083    4347 command_runner.go:124] > # Please refer to crio.conf(5) for details of all configuration options.
	I0810 22:34:10.605096    4347 command_runner.go:124] > # CRI-O supports partial configuration reload during runtime, which can be
	I0810 22:34:10.605110    4347 command_runner.go:124] > # done by sending SIGHUP to the running process. Currently supported options
	I0810 22:34:10.605121    4347 command_runner.go:124] > # are explicitly mentioned with: 'This option supports live configuration
	I0810 22:34:10.605127    4347 command_runner.go:124] > # reload'.
	I0810 22:34:10.605134    4347 command_runner.go:124] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0810 22:34:10.605143    4347 command_runner.go:124] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0810 22:34:10.605152    4347 command_runner.go:124] > # you want to change the system's defaults. If you want to modify storage just
	I0810 22:34:10.605163    4347 command_runner.go:124] > # for CRI-O, you can change the storage configuration options here.
	I0810 22:34:10.605170    4347 command_runner.go:124] > [crio]
	I0810 22:34:10.605177    4347 command_runner.go:124] > # Path to the "root directory". CRI-O stores all of its data, including
	I0810 22:34:10.605184    4347 command_runner.go:124] > # container images, in this directory.
	I0810 22:34:10.605189    4347 command_runner.go:124] > #root = "/var/lib/containers/storage"
	I0810 22:34:10.605204    4347 command_runner.go:124] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0810 22:34:10.605215    4347 command_runner.go:124] > #runroot = "/var/run/containers/storage"
	I0810 22:34:10.605229    4347 command_runner.go:124] > # Storage driver used to manage the storage of images and containers. Please
	I0810 22:34:10.605242    4347 command_runner.go:124] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0810 22:34:10.605252    4347 command_runner.go:124] > #storage_driver = "overlay"
	I0810 22:34:10.605261    4347 command_runner.go:124] > # List to pass options to the storage driver. Please refer to
	I0810 22:34:10.605273    4347 command_runner.go:124] > # containers-storage.conf(5) to see all available storage options.
	I0810 22:34:10.605282    4347 command_runner.go:124] > #storage_option = [
	I0810 22:34:10.605287    4347 command_runner.go:124] > #]
	I0810 22:34:10.605302    4347 command_runner.go:124] > # The default log directory where all logs will go unless directly specified by
	I0810 22:34:10.605314    4347 command_runner.go:124] > # the kubelet. The log directory specified must be an absolute directory.
	I0810 22:34:10.605324    4347 command_runner.go:124] > log_dir = "/var/log/crio/pods"
	I0810 22:34:10.605336    4347 command_runner.go:124] > # Location for CRI-O to lay down the temporary version file.
	I0810 22:34:10.605348    4347 command_runner.go:124] > # It is used to check if crio wipe should wipe containers, which should
	I0810 22:34:10.605355    4347 command_runner.go:124] > # always happen on a node reboot
	I0810 22:34:10.605360    4347 command_runner.go:124] > version_file = "/var/run/crio/version"
	I0810 22:34:10.605368    4347 command_runner.go:124] > # Location for CRI-O to lay down the persistent version file.
	I0810 22:34:10.605375    4347 command_runner.go:124] > # It is used to check if crio wipe should wipe images, which should
	I0810 22:34:10.605384    4347 command_runner.go:124] > # only happen when CRI-O has been upgraded
	I0810 22:34:10.605398    4347 command_runner.go:124] > version_file_persist = "/var/lib/crio/version"
	I0810 22:34:10.605407    4347 command_runner.go:124] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0810 22:34:10.605412    4347 command_runner.go:124] > [crio.api]
	I0810 22:34:10.605420    4347 command_runner.go:124] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0810 22:34:10.605425    4347 command_runner.go:124] > listen = "/var/run/crio/crio.sock"
	I0810 22:34:10.605432    4347 command_runner.go:124] > # IP address on which the stream server will listen.
	I0810 22:34:10.605437    4347 command_runner.go:124] > stream_address = "127.0.0.1"
	I0810 22:34:10.605444    4347 command_runner.go:124] > # The port on which the stream server will listen. If the port is set to "0", then
	I0810 22:34:10.605452    4347 command_runner.go:124] > # CRI-O will allocate a random free port number.
	I0810 22:34:10.605456    4347 command_runner.go:124] > stream_port = "0"
	I0810 22:34:10.605462    4347 command_runner.go:124] > # Enable encrypted TLS transport of the stream server.
	I0810 22:34:10.605467    4347 command_runner.go:124] > stream_enable_tls = false
	I0810 22:34:10.605476    4347 command_runner.go:124] > # Length of time until open streams terminate due to lack of activity
	I0810 22:34:10.605481    4347 command_runner.go:124] > stream_idle_timeout = ""
	I0810 22:34:10.605487    4347 command_runner.go:124] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0810 22:34:10.605496    4347 command_runner.go:124] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0810 22:34:10.605503    4347 command_runner.go:124] > # minutes.
	I0810 22:34:10.605506    4347 command_runner.go:124] > stream_tls_cert = ""
	I0810 22:34:10.605513    4347 command_runner.go:124] > # Path to the key file used to serve the encrypted stream. This file can
	I0810 22:34:10.605521    4347 command_runner.go:124] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0810 22:34:10.605525    4347 command_runner.go:124] > stream_tls_key = ""
	I0810 22:34:10.605531    4347 command_runner.go:124] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0810 22:34:10.605540    4347 command_runner.go:124] > # communication with the encrypted stream. This file can change and CRI-O will
	I0810 22:34:10.605546    4347 command_runner.go:124] > # automatically pick up the changes within 5 minutes.
	I0810 22:34:10.605552    4347 command_runner.go:124] > stream_tls_ca = ""
	I0810 22:34:10.605560    4347 command_runner.go:124] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0810 22:34:10.605567    4347 command_runner.go:124] > grpc_max_send_msg_size = 16777216
	I0810 22:34:10.605576    4347 command_runner.go:124] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0810 22:34:10.605582    4347 command_runner.go:124] > grpc_max_recv_msg_size = 16777216
	I0810 22:34:10.605589    4347 command_runner.go:124] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0810 22:34:10.605597    4347 command_runner.go:124] > # and options for how to set up and manage the OCI runtime.
	I0810 22:34:10.605601    4347 command_runner.go:124] > [crio.runtime]
	I0810 22:34:10.605607    4347 command_runner.go:124] > # A list of ulimits to be set in containers by default, specified as
	I0810 22:34:10.605614    4347 command_runner.go:124] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0810 22:34:10.605618    4347 command_runner.go:124] > # "nofile=1024:2048"
	I0810 22:34:10.605624    4347 command_runner.go:124] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0810 22:34:10.605630    4347 command_runner.go:124] > #default_ulimits = [
	I0810 22:34:10.605633    4347 command_runner.go:124] > #]
	I0810 22:34:10.605640    4347 command_runner.go:124] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0810 22:34:10.605647    4347 command_runner.go:124] > no_pivot = false
	I0810 22:34:10.605656    4347 command_runner.go:124] > # decryption_keys_path is the path where the keys required for
	I0810 22:34:10.605671    4347 command_runner.go:124] > # image decryption are stored. This option supports live configuration reload.
	I0810 22:34:10.605679    4347 command_runner.go:124] > decryption_keys_path = "/etc/crio/keys/"
	I0810 22:34:10.605685    4347 command_runner.go:124] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0810 22:34:10.605692    4347 command_runner.go:124] > # Will be searched for using $PATH if empty.
	I0810 22:34:10.605699    4347 command_runner.go:124] > conmon = "/usr/libexec/crio/conmon"
	I0810 22:34:10.605706    4347 command_runner.go:124] > # Cgroup setting for conmon
	I0810 22:34:10.605711    4347 command_runner.go:124] > conmon_cgroup = "system.slice"
	I0810 22:34:10.605726    4347 command_runner.go:124] > # Environment variable list for the conmon process, used for passing necessary
	I0810 22:34:10.605734    4347 command_runner.go:124] > # environment variables to conmon or the runtime.
	I0810 22:34:10.605738    4347 command_runner.go:124] > conmon_env = [
	I0810 22:34:10.605744    4347 command_runner.go:124] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0810 22:34:10.605749    4347 command_runner.go:124] > ]
	I0810 22:34:10.605755    4347 command_runner.go:124] > # Additional environment variables to set for all the
	I0810 22:34:10.605770    4347 command_runner.go:124] > # containers. These are overridden if set in the
	I0810 22:34:10.605776    4347 command_runner.go:124] > # container image spec or in the container runtime configuration.
	I0810 22:34:10.605783    4347 command_runner.go:124] > default_env = [
	I0810 22:34:10.605786    4347 command_runner.go:124] > ]
	I0810 22:34:10.605792    4347 command_runner.go:124] > # If true, SELinux will be used for pod separation on the host.
	I0810 22:34:10.605798    4347 command_runner.go:124] > selinux = false
	I0810 22:34:10.605805    4347 command_runner.go:124] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0810 22:34:10.605814    4347 command_runner.go:124] > # for the runtime. If not specified, then the internal default seccomp profile
	I0810 22:34:10.605820    4347 command_runner.go:124] > # will be used. This option supports live configuration reload.
	I0810 22:34:10.605826    4347 command_runner.go:124] > seccomp_profile = ""
	I0810 22:34:10.605835    4347 command_runner.go:124] > # Changes the meaning of an empty seccomp profile. By default
	I0810 22:34:10.605845    4347 command_runner.go:124] > # (and according to CRI spec), an empty profile means unconfined.
	I0810 22:34:10.605851    4347 command_runner.go:124] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0810 22:34:10.605860    4347 command_runner.go:124] > # which might increase security.
	I0810 22:34:10.605865    4347 command_runner.go:124] > seccomp_use_default_when_empty = false
	I0810 22:34:10.605874    4347 command_runner.go:124] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0810 22:34:10.605881    4347 command_runner.go:124] > # profile name is "crio-default". This profile only takes effect if the user
	I0810 22:34:10.605890    4347 command_runner.go:124] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0810 22:34:10.605896    4347 command_runner.go:124] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0810 22:34:10.605904    4347 command_runner.go:124] > # This option supports live configuration reload.
	I0810 22:34:10.605908    4347 command_runner.go:124] > apparmor_profile = "crio-default"
	I0810 22:34:10.605916    4347 command_runner.go:124] > # Used to change irqbalance service config file path which is used for configuring
	I0810 22:34:10.605922    4347 command_runner.go:124] > # irqbalance daemon.
	I0810 22:34:10.605927    4347 command_runner.go:124] > irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0810 22:34:10.605935    4347 command_runner.go:124] > # Cgroup management implementation used for the runtime.
	I0810 22:34:10.605941    4347 command_runner.go:124] > cgroup_manager = "systemd"
	I0810 22:34:10.605949    4347 command_runner.go:124] > # Specify whether the image pull must be performed in a separate cgroup.
	I0810 22:34:10.605954    4347 command_runner.go:124] > separate_pull_cgroup = ""
	I0810 22:34:10.605960    4347 command_runner.go:124] > # List of default capabilities for containers. If it is empty or commented out,
	I0810 22:34:10.605969    4347 command_runner.go:124] > # only the capabilities defined in the containers json file by the user/kube
	I0810 22:34:10.605973    4347 command_runner.go:124] > # will be added.
	I0810 22:34:10.605977    4347 command_runner.go:124] > default_capabilities = [
	I0810 22:34:10.605980    4347 command_runner.go:124] > 	"CHOWN",
	I0810 22:34:10.605985    4347 command_runner.go:124] > 	"DAC_OVERRIDE",
	I0810 22:34:10.605994    4347 command_runner.go:124] > 	"FSETID",
	I0810 22:34:10.605999    4347 command_runner.go:124] > 	"FOWNER",
	I0810 22:34:10.606005    4347 command_runner.go:124] > 	"SETGID",
	I0810 22:34:10.606010    4347 command_runner.go:124] > 	"SETUID",
	I0810 22:34:10.606015    4347 command_runner.go:124] > 	"SETPCAP",
	I0810 22:34:10.606021    4347 command_runner.go:124] > 	"NET_BIND_SERVICE",
	I0810 22:34:10.606027    4347 command_runner.go:124] > 	"KILL",
	I0810 22:34:10.606031    4347 command_runner.go:124] > ]
	I0810 22:34:10.606041    4347 command_runner.go:124] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0810 22:34:10.606051    4347 command_runner.go:124] > # defined in the container json file by the user/kube will be added.
	I0810 22:34:10.606057    4347 command_runner.go:124] > default_sysctls = [
	I0810 22:34:10.606063    4347 command_runner.go:124] > ]
	I0810 22:34:10.606070    4347 command_runner.go:124] > # List of additional devices, specified as
	I0810 22:34:10.606082    4347 command_runner.go:124] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0810 22:34:10.606092    4347 command_runner.go:124] > # If it is empty or commented out, only the devices
	I0810 22:34:10.606101    4347 command_runner.go:124] > # defined in the container json file by the user/kube will be added.
	I0810 22:34:10.606108    4347 command_runner.go:124] > additional_devices = [
	I0810 22:34:10.606113    4347 command_runner.go:124] > ]
	I0810 22:34:10.606124    4347 command_runner.go:124] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0810 22:34:10.606134    4347 command_runner.go:124] > # directories does not exist, then CRI-O will automatically skip them.
	I0810 22:34:10.606140    4347 command_runner.go:124] > hooks_dir = [
	I0810 22:34:10.606148    4347 command_runner.go:124] > 	"/usr/share/containers/oci/hooks.d",
	I0810 22:34:10.606152    4347 command_runner.go:124] > ]
	I0810 22:34:10.606163    4347 command_runner.go:124] > # Path to the file specifying the defaults mounts for each container. The
	I0810 22:34:10.606173    4347 command_runner.go:124] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0810 22:34:10.606183    4347 command_runner.go:124] > # its default mounts from the following two files:
	I0810 22:34:10.606188    4347 command_runner.go:124] > #
	I0810 22:34:10.606200    4347 command_runner.go:124] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0810 22:34:10.606207    4347 command_runner.go:124] > #      override file, where users can either add in their own default mounts, or
	I0810 22:34:10.606216    4347 command_runner.go:124] > #      override the default mounts shipped with the package.
	I0810 22:34:10.606222    4347 command_runner.go:124] > #
	I0810 22:34:10.606228    4347 command_runner.go:124] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0810 22:34:10.606236    4347 command_runner.go:124] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0810 22:34:10.606243    4347 command_runner.go:124] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0810 22:34:10.606249    4347 command_runner.go:124] > #      only add mounts it finds in this file.
	I0810 22:34:10.606253    4347 command_runner.go:124] > #
	I0810 22:34:10.606257    4347 command_runner.go:124] > #default_mounts_file = ""
	I0810 22:34:10.606262    4347 command_runner.go:124] > # Maximum number of processes allowed in a container.
	I0810 22:34:10.606267    4347 command_runner.go:124] > pids_limit = 1024
	I0810 22:34:10.606273    4347 command_runner.go:124] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0810 22:34:10.606280    4347 command_runner.go:124] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0810 22:34:10.606287    4347 command_runner.go:124] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0810 22:34:10.606295    4347 command_runner.go:124] > # limit is never exceeded.
	I0810 22:34:10.606300    4347 command_runner.go:124] > log_size_max = -1
	I0810 22:34:10.606322    4347 command_runner.go:124] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0810 22:34:10.606329    4347 command_runner.go:124] > log_to_journald = false
	I0810 22:34:10.606335    4347 command_runner.go:124] > # Path to directory in which container exit files are written to by conmon.
	I0810 22:34:10.606342    4347 command_runner.go:124] > container_exits_dir = "/var/run/crio/exits"
	I0810 22:34:10.606347    4347 command_runner.go:124] > # Path to directory for container attach sockets.
	I0810 22:34:10.606352    4347 command_runner.go:124] > container_attach_socket_dir = "/var/run/crio"
	I0810 22:34:10.606358    4347 command_runner.go:124] > # The prefix to use for the source of the bind mounts.
	I0810 22:34:10.606362    4347 command_runner.go:124] > bind_mount_prefix = ""
	I0810 22:34:10.606368    4347 command_runner.go:124] > # If set to true, all containers will run in read-only mode.
	I0810 22:34:10.606372    4347 command_runner.go:124] > read_only = false
	I0810 22:34:10.606378    4347 command_runner.go:124] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0810 22:34:10.606387    4347 command_runner.go:124] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0810 22:34:10.606391    4347 command_runner.go:124] > # live configuration reload.
	I0810 22:34:10.606395    4347 command_runner.go:124] > log_level = "info"
	I0810 22:34:10.606403    4347 command_runner.go:124] > # Filter the log messages by the provided regular expression.
	I0810 22:34:10.606409    4347 command_runner.go:124] > # This option supports live configuration reload.
	I0810 22:34:10.606413    4347 command_runner.go:124] > log_filter = ""
	I0810 22:34:10.606419    4347 command_runner.go:124] > # The UID mappings for the user namespace of each container. A range is
	I0810 22:34:10.606426    4347 command_runner.go:124] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0810 22:34:10.606430    4347 command_runner.go:124] > # separated by comma.
	I0810 22:34:10.606433    4347 command_runner.go:124] > uid_mappings = ""
	I0810 22:34:10.606440    4347 command_runner.go:124] > # The GID mappings for the user namespace of each container. A range is
	I0810 22:34:10.606446    4347 command_runner.go:124] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0810 22:34:10.606451    4347 command_runner.go:124] > # separated by comma.
	I0810 22:34:10.606454    4347 command_runner.go:124] > gid_mappings = ""
	I0810 22:34:10.606463    4347 command_runner.go:124] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0810 22:34:10.606475    4347 command_runner.go:124] > # regarding the proper termination of the container. The lowest possible
	I0810 22:34:10.606484    4347 command_runner.go:124] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0810 22:34:10.606490    4347 command_runner.go:124] > ctr_stop_timeout = 30
	I0810 22:34:10.606500    4347 command_runner.go:124] > # manage_ns_lifecycle determines whether we pin and remove namespaces
	I0810 22:34:10.606507    4347 command_runner.go:124] > # and manage their lifecycle.
	I0810 22:34:10.606514    4347 command_runner.go:124] > # This option is being deprecated, and will be unconditionally true in the future.
	I0810 22:34:10.606518    4347 command_runner.go:124] > manage_ns_lifecycle = true
	I0810 22:34:10.606524    4347 command_runner.go:124] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0810 22:34:10.606532    4347 command_runner.go:124] > # when a pod does not have a private PID namespace, and does not use
	I0810 22:34:10.606537    4347 command_runner.go:124] > # a kernel separating runtime (like kata).
	I0810 22:34:10.606544    4347 command_runner.go:124] > # It requires manage_ns_lifecycle to be true.
	I0810 22:34:10.606549    4347 command_runner.go:124] > drop_infra_ctr = false
	I0810 22:34:10.606555    4347 command_runner.go:124] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0810 22:34:10.606565    4347 command_runner.go:124] > # You can use linux CPU list format to specify desired CPUs.
	I0810 22:34:10.606578    4347 command_runner.go:124] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0810 22:34:10.606587    4347 command_runner.go:124] > # infra_ctr_cpuset = ""
	I0810 22:34:10.606596    4347 command_runner.go:124] > # The directory where the state of the managed namespaces gets tracked.
	I0810 22:34:10.606605    4347 command_runner.go:124] > # Only used when manage_ns_lifecycle is true.
	I0810 22:34:10.606610    4347 command_runner.go:124] > namespaces_dir = "/var/run"
	I0810 22:34:10.606620    4347 command_runner.go:124] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0810 22:34:10.606624    4347 command_runner.go:124] > pinns_path = "/usr/bin/pinns"
	I0810 22:34:10.606631    4347 command_runner.go:124] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0810 22:34:10.606639    4347 command_runner.go:124] > # The name is matched against the runtimes map below. If this value is changed,
	I0810 22:34:10.606645    4347 command_runner.go:124] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0810 22:34:10.606650    4347 command_runner.go:124] > default_runtime = "runc"
	I0810 22:34:10.606657    4347 command_runner.go:124] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0810 22:34:10.606664    4347 command_runner.go:124] > # The runtime to use is picked based on the runtime_handler provided by the CRI.
	I0810 22:34:10.606671    4347 command_runner.go:124] > # If no runtime_handler is provided, the runtime will be picked based on the level
	I0810 22:34:10.606679    4347 command_runner.go:124] > # of trust of the workload. Each entry in the table should follow the format:
	I0810 22:34:10.606682    4347 command_runner.go:124] > #
	I0810 22:34:10.606687    4347 command_runner.go:124] > #[crio.runtime.runtimes.runtime-handler]
	I0810 22:34:10.606693    4347 command_runner.go:124] > #  runtime_path = "/path/to/the/executable"
	I0810 22:34:10.606697    4347 command_runner.go:124] > #  runtime_type = "oci"
	I0810 22:34:10.606702    4347 command_runner.go:124] > #  runtime_root = "/path/to/the/root"
	I0810 22:34:10.606708    4347 command_runner.go:124] > #  privileged_without_host_devices = false
	I0810 22:34:10.606712    4347 command_runner.go:124] > #  allowed_annotations = []
	I0810 22:34:10.606719    4347 command_runner.go:124] > # Where:
	I0810 22:34:10.606725    4347 command_runner.go:124] > # - runtime-handler: name used to identify the runtime
	I0810 22:34:10.606734    4347 command_runner.go:124] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0810 22:34:10.606743    4347 command_runner.go:124] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0810 22:34:10.606750    4347 command_runner.go:124] > #   the runtime executable name, and the runtime executable should be placed
	I0810 22:34:10.606753    4347 command_runner.go:124] > #   in $PATH.
	I0810 22:34:10.606760    4347 command_runner.go:124] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0810 22:34:10.606766    4347 command_runner.go:124] > #   omitted, an "oci" runtime is assumed.
	I0810 22:34:10.606772    4347 command_runner.go:124] > # - runtime_root (optional, string): root directory for storage of containers
	I0810 22:34:10.606775    4347 command_runner.go:124] > #   state.
	I0810 22:34:10.606782    4347 command_runner.go:124] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0810 22:34:10.606789    4347 command_runner.go:124] > #   host devices from being passed to privileged containers.
	I0810 22:34:10.606795    4347 command_runner.go:124] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0810 22:34:10.606805    4347 command_runner.go:124] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0810 22:34:10.606811    4347 command_runner.go:124] > #   The currently recognized values are:
	I0810 22:34:10.606818    4347 command_runner.go:124] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0810 22:34:10.606825    4347 command_runner.go:124] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0810 22:34:10.606831    4347 command_runner.go:124] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0810 22:34:10.606836    4347 command_runner.go:124] > [crio.runtime.runtimes.runc]
	I0810 22:34:10.606841    4347 command_runner.go:124] > runtime_path = "/usr/bin/runc"
	I0810 22:34:10.606845    4347 command_runner.go:124] > runtime_type = "oci"
	I0810 22:34:10.606849    4347 command_runner.go:124] > runtime_root = "/run/runc"
	I0810 22:34:10.606856    4347 command_runner.go:124] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0810 22:34:10.606861    4347 command_runner.go:124] > # running containers
	I0810 22:34:10.606865    4347 command_runner.go:124] > #[crio.runtime.runtimes.crun]
	I0810 22:34:10.606873    4347 command_runner.go:124] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0810 22:34:10.606882    4347 command_runner.go:124] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0810 22:34:10.606888    4347 command_runner.go:124] > # surface and mitigating the consequences of containers breakout.
	I0810 22:34:10.606895    4347 command_runner.go:124] > # Kata Containers with the default configured VMM
	I0810 22:34:10.606900    4347 command_runner.go:124] > #[crio.runtime.runtimes.kata-runtime]
	I0810 22:34:10.606904    4347 command_runner.go:124] > # Kata Containers with the QEMU VMM
	I0810 22:34:10.606910    4347 command_runner.go:124] > #[crio.runtime.runtimes.kata-qemu]
	I0810 22:34:10.606914    4347 command_runner.go:124] > # Kata Containers with the Firecracker VMM
	I0810 22:34:10.606919    4347 command_runner.go:124] > #[crio.runtime.runtimes.kata-fc]
	I0810 22:34:10.606926    4347 command_runner.go:124] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0810 22:34:10.606931    4347 command_runner.go:124] > #
	I0810 22:34:10.606937    4347 command_runner.go:124] > # CRI-O reads its configured registries defaults from the system wide
	I0810 22:34:10.606943    4347 command_runner.go:124] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0810 22:34:10.606950    4347 command_runner.go:124] > # you want to modify just CRI-O, you can change the registries configuration in
	I0810 22:34:10.606957    4347 command_runner.go:124] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0810 22:34:10.606964    4347 command_runner.go:124] > # use the system's defaults from /etc/containers/registries.conf.
	I0810 22:34:10.606970    4347 command_runner.go:124] > [crio.image]
	I0810 22:34:10.606977    4347 command_runner.go:124] > # Default transport for pulling images from a remote container storage.
	I0810 22:34:10.606981    4347 command_runner.go:124] > default_transport = "docker://"
	I0810 22:34:10.606991    4347 command_runner.go:124] > # The path to a file containing credentials necessary for pulling images from
	I0810 22:34:10.607004    4347 command_runner.go:124] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0810 22:34:10.607010    4347 command_runner.go:124] > global_auth_file = ""
	I0810 22:34:10.607019    4347 command_runner.go:124] > # The image used to instantiate infra containers.
	I0810 22:34:10.607026    4347 command_runner.go:124] > # This option supports live configuration reload.
	I0810 22:34:10.607035    4347 command_runner.go:124] > pause_image = "k8s.gcr.io/pause:3.4.1"
	I0810 22:34:10.607045    4347 command_runner.go:124] > # The path to a file containing credentials specific for pulling the pause_image from
	I0810 22:34:10.607058    4347 command_runner.go:124] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0810 22:34:10.607068    4347 command_runner.go:124] > # This option supports live configuration reload.
	I0810 22:34:10.607073    4347 command_runner.go:124] > pause_image_auth_file = ""
	I0810 22:34:10.607083    4347 command_runner.go:124] > # The command to run to have a container stay in the paused state.
	I0810 22:34:10.607092    4347 command_runner.go:124] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0810 22:34:10.607104    4347 command_runner.go:124] > # specified in the pause image. When commented out, it will fallback to the
	I0810 22:34:10.607116    4347 command_runner.go:124] > # default: "/pause". This option supports live configuration reload.
	I0810 22:34:10.607125    4347 command_runner.go:124] > pause_command = "/pause"
	I0810 22:34:10.607135    4347 command_runner.go:124] > # Path to the file which decides what sort of policy we use when deciding
	I0810 22:34:10.607148    4347 command_runner.go:124] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0810 22:34:10.607159    4347 command_runner.go:124] > # this option be used, as the default behavior of using the system-wide default
	I0810 22:34:10.607171    4347 command_runner.go:124] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0810 22:34:10.607180    4347 command_runner.go:124] > # refer to containers-policy.json(5) for more details.
	I0810 22:34:10.607186    4347 command_runner.go:124] > signature_policy = ""
	I0810 22:34:10.607197    4347 command_runner.go:124] > # List of registries to skip TLS verification for pulling images. Please
	I0810 22:34:10.607205    4347 command_runner.go:124] > # consider configuring the registries via /etc/containers/registries.conf before
	I0810 22:34:10.607213    4347 command_runner.go:124] > # changing them here.
	I0810 22:34:10.607219    4347 command_runner.go:124] > #insecure_registries = "[]"
	I0810 22:34:10.607230    4347 command_runner.go:124] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0810 22:34:10.607240    4347 command_runner.go:124] > # ignore; the latter will ignore volumes entirely.
	I0810 22:34:10.607246    4347 command_runner.go:124] > image_volumes = "mkdir"
	I0810 22:34:10.607261    4347 command_runner.go:124] > # List of registries to be used when pulling an unqualified image (e.g.,
	I0810 22:34:10.607273    4347 command_runner.go:124] > # "alpine:latest"). By default, registries is set to "docker.io" for
	I0810 22:34:10.607280    4347 command_runner.go:124] > # compatibility reasons. Depending on your workload and usecase you may add more
	I0810 22:34:10.607287    4347 command_runner.go:124] > # registries (e.g., "quay.io", "registry.fedoraproject.org",
	I0810 22:34:10.607292    4347 command_runner.go:124] > # "registry.opensuse.org", etc.).
	I0810 22:34:10.607296    4347 command_runner.go:124] > #registries = [
	I0810 22:34:10.607300    4347 command_runner.go:124] > # 	"docker.io",
	I0810 22:34:10.607303    4347 command_runner.go:124] > #]
	I0810 22:34:10.607311    4347 command_runner.go:124] > # Temporary directory to use for storing big files
	I0810 22:34:10.607316    4347 command_runner.go:124] > big_files_temporary_dir = ""
	I0810 22:34:10.607323    4347 command_runner.go:124] > # The crio.network table contains settings pertaining to the management of
	I0810 22:34:10.607327    4347 command_runner.go:124] > # CNI plugins.
	I0810 22:34:10.607331    4347 command_runner.go:124] > [crio.network]
	I0810 22:34:10.607337    4347 command_runner.go:124] > # The default CNI network name to be selected. If not set or "", then
	I0810 22:34:10.607343    4347 command_runner.go:124] > # CRI-O will pick-up the first one found in network_dir.
	I0810 22:34:10.607348    4347 command_runner.go:124] > # cni_default_network = "kindnet"
	I0810 22:34:10.607355    4347 command_runner.go:124] > # Path to the directory where CNI configuration files are located.
	I0810 22:34:10.607360    4347 command_runner.go:124] > network_dir = "/etc/cni/net.d/"
	I0810 22:34:10.607368    4347 command_runner.go:124] > # Paths to directories where CNI plugin binaries are located.
	I0810 22:34:10.607374    4347 command_runner.go:124] > plugin_dirs = [
	I0810 22:34:10.607379    4347 command_runner.go:124] > 	"/opt/cni/bin/",
	I0810 22:34:10.607382    4347 command_runner.go:124] > ]
	I0810 22:34:10.607388    4347 command_runner.go:124] > # A necessary configuration for Prometheus based metrics retrieval
	I0810 22:34:10.607392    4347 command_runner.go:124] > [crio.metrics]
	I0810 22:34:10.607397    4347 command_runner.go:124] > # Globally enable or disable metrics support.
	I0810 22:34:10.607403    4347 command_runner.go:124] > enable_metrics = true
	I0810 22:34:10.607408    4347 command_runner.go:124] > # The port on which the metrics server will listen.
	I0810 22:34:10.607412    4347 command_runner.go:124] > metrics_port = 9090
	I0810 22:34:10.607435    4347 command_runner.go:124] > # Local socket path to bind the metrics server to
	I0810 22:34:10.607442    4347 command_runner.go:124] > metrics_socket = ""
	I0810 22:34:10.607505    4347 cni.go:93] Creating CNI manager for ""
	I0810 22:34:10.607516    4347 cni.go:154] 2 nodes found, recommending kindnet
	I0810 22:34:10.607526    4347 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0810 22:34:10.607539    4347 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.251 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20210810223223-30291 NodeName:multinode-20210810223223-30291-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.50.251 CgroupDriver:systemd ClientCAF
ile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0810 22:34:10.607651    4347 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.251
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "multinode-20210810223223-30291-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.251
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0810 22:34:10.607723    4347 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --hostname-override=multinode-20210810223223-30291-m02 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.251 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:multinode-20210810223223-30291 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0810 22:34:10.607774    4347 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0810 22:34:10.615456    4347 command_runner.go:124] > kubeadm
	I0810 22:34:10.615473    4347 command_runner.go:124] > kubectl
	I0810 22:34:10.615478    4347 command_runner.go:124] > kubelet
	I0810 22:34:10.615657    4347 binaries.go:44] Found k8s binaries, skipping transfer
	I0810 22:34:10.615722    4347 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0810 22:34:10.622468    4347 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (515 bytes)
	I0810 22:34:10.634378    4347 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0810 22:34:10.646471    4347 ssh_runner.go:149] Run: grep 192.168.50.32	control-plane.minikube.internal$ /etc/hosts
	I0810 22:34:10.650829    4347 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0810 22:34:10.661317    4347 host.go:66] Checking if "multinode-20210810223223-30291" exists ...
	I0810 22:34:10.661683    4347 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:34:10.661730    4347 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:34:10.673179    4347 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38391
	I0810 22:34:10.673630    4347 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:34:10.674120    4347 main.go:130] libmachine: Using API Version  1
	I0810 22:34:10.674143    4347 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:34:10.674451    4347 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:34:10.674631    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .DriverName
	I0810 22:34:10.674745    4347 start.go:241] JoinCluster: &{Name:multinode-20210810223223-30291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12122/minikube-v1.22.0-1628238775-12122.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-202
10810223223-30291 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.32 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true} {Name:m02 IP:192.168.50.251 Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0}
	I0810 22:34:10.674843    4347 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm token create --print-join-command --ttl=0"
	I0810 22:34:10.674867    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHHostname
	I0810 22:34:10.680033    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:34:10.680419    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:89", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:32:38 +0000 UTC Type:0 Mac:52:54:00:ce:d8:89 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:multinode-20210810223223-30291 Clientid:01:52:54:00:ce:d8:89}
	I0810 22:34:10.680440    4347 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined IP address 192.168.50.32 and MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:34:10.680578    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHPort
	I0810 22:34:10.680742    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHKeyPath
	I0810 22:34:10.680874    4347 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHUsername
	I0810 22:34:10.681009    4347 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291/id_rsa Username:docker}
	I0810 22:34:12.440322    4347 command_runner.go:124] > kubeadm join control-plane.minikube.internal:8443 --token gytvgv.ywx6mbvc36673jsk --discovery-token-ca-cert-hash sha256:792de24c5d5a120bf4aa3a25755c9ac1b4ccaeb2dbca2444b5b705903a56bd34 
	I0810 22:34:12.440364    4347 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm token create --print-join-command --ttl=0": (1.765507011s)
	I0810 22:34:12.440402    4347 start.go:262] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.50.251 Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}
	I0810 22:34:12.440501    4347 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm join control-plane.minikube.internal:8443 --token gytvgv.ywx6mbvc36673jsk --discovery-token-ca-cert-hash sha256:792de24c5d5a120bf4aa3a25755c9ac1b4ccaeb2dbca2444b5b705903a56bd34 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-20210810223223-30291-m02"
	I0810 22:34:12.584570    4347 command_runner.go:124] > [preflight] Running pre-flight checks
	I0810 22:34:12.935834    4347 command_runner.go:124] > [preflight] Reading configuration from the cluster...
	I0810 22:34:12.935906    4347 command_runner.go:124] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0810 22:34:12.988878    4347 command_runner.go:124] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0810 22:34:12.989678    4347 command_runner.go:124] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0810 22:34:12.989758    4347 command_runner.go:124] > [kubelet-start] Starting the kubelet
	I0810 22:34:13.147943    4347 command_runner.go:124] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0810 22:34:19.235875    4347 command_runner.go:124] > This node has joined the cluster:
	I0810 22:34:19.235906    4347 command_runner.go:124] > * Certificate signing request was sent to apiserver and a response was received.
	I0810 22:34:19.235916    4347 command_runner.go:124] > * The Kubelet was informed of the new secure connection details.
	I0810 22:34:19.235926    4347 command_runner.go:124] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0810 22:34:19.238240    4347 command_runner.go:124] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0810 22:34:19.238276    4347 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm join control-plane.minikube.internal:8443 --token gytvgv.ywx6mbvc36673jsk --discovery-token-ca-cert-hash sha256:792de24c5d5a120bf4aa3a25755c9ac1b4ccaeb2dbca2444b5b705903a56bd34 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-20210810223223-30291-m02": (6.797755438s)
	I0810 22:34:19.238299    4347 ssh_runner.go:149] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0810 22:34:19.590115    4347 command_runner.go:124] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0810 22:34:19.590518    4347 start.go:243] JoinCluster complete in 8.915767718s
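	The join above hinges on the one-shot `kubeadm join` command minted at 22:34:12. As a minimal sketch (not part of the test run; the command string is copied verbatim from the log, and the variable names are illustrative), its endpoint, token, and CA cert hash can be pulled apart like so:

```shell
# Split the kubeadm join command printed in the log into its parts.
join_cmd='kubeadm join control-plane.minikube.internal:8443 --token gytvgv.ywx6mbvc36673jsk --discovery-token-ca-cert-hash sha256:792de24c5d5a120bf4aa3a25755c9ac1b4ccaeb2dbca2444b5b705903a56bd34'

# Field 3 is the control-plane endpoint ("kubeadm join <endpoint> ...").
endpoint=$(echo "$join_cmd" | awk '{print $3}')

# Scan for each flag and print the value that follows it.
token=$(echo "$join_cmd" | awk '{for (i = 1; i < NF; i++) if ($i == "--token") print $(i + 1)}')
ca_hash=$(echo "$join_cmd" | awk '{for (i = 1; i < NF; i++) if ($i == "--discovery-token-ca-cert-hash") print $(i + 1)}')

echo "$endpoint"  # control-plane.minikube.internal:8443
echo "$token"     # gytvgv.ywx6mbvc36673jsk
```

	Note the `--ttl=0` used when minting it: the token never expires, which is acceptable for a throwaway test cluster but not for production.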
	I0810 22:34:19.590547    4347 cni.go:93] Creating CNI manager for ""
	I0810 22:34:19.590556    4347 cni.go:154] 2 nodes found, recommending kindnet
	I0810 22:34:19.590625    4347 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0810 22:34:19.596465    4347 command_runner.go:124] >   File: /opt/cni/bin/portmap
	I0810 22:34:19.596488    4347 command_runner.go:124] >   Size: 2853400   	Blocks: 5576       IO Block: 4096   regular file
	I0810 22:34:19.596499    4347 command_runner.go:124] > Device: 10h/16d	Inode: 22873       Links: 1
	I0810 22:34:19.596506    4347 command_runner.go:124] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0810 22:34:19.596511    4347 command_runner.go:124] > Access: 2021-08-10 22:32:38.220478056 +0000
	I0810 22:34:19.596517    4347 command_runner.go:124] > Modify: 2021-08-06 09:23:24.000000000 +0000
	I0810 22:34:19.596522    4347 command_runner.go:124] > Change: 2021-08-10 22:32:33.951478056 +0000
	I0810 22:34:19.596526    4347 command_runner.go:124] >  Birth: -
	I0810 22:34:19.596569    4347 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0810 22:34:19.596579    4347 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0810 22:34:19.609756    4347 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0810 22:34:19.905247    4347 command_runner.go:124] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0810 22:34:19.907679    4347 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0810 22:34:19.910391    4347 command_runner.go:124] > serviceaccount/kindnet unchanged
	I0810 22:34:19.923321    4347 command_runner.go:124] > daemonset.apps/kindnet configured
	I0810 22:34:19.926489    4347 start.go:226] Will wait 6m0s for node &{Name:m02 IP:192.168.50.251 Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}
	I0810 22:34:19.928537    4347 out.go:177] * Verifying Kubernetes components...
	I0810 22:34:19.928610    4347 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0810 22:34:19.940193    4347 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0810 22:34:19.940424    4347 kapi.go:59] client config for multinode-20210810223223-30291: &rest.Config{Host:"https://192.168.50.32:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-30291/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210810223223-302
91/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2660), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0810 22:34:19.941599    4347 node_ready.go:35] waiting up to 6m0s for node "multinode-20210810223223-30291-m02" to be "Ready" ...
	I0810 22:34:19.941667    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291-m02
	I0810 22:34:19.941675    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:19.941680    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:19.941687    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:19.944972    4347 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:34:19.944992    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:19.944999    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:19.945005    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:19.945009    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:19.945015    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:19.945020    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:19 GMT
	I0810 22:34:19.945532    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291-m02","uid":"2bb9e107-eff6-4e4c-b421-18b111080a9d","resourceVersion":"555","creationTimestamp":"2021-08-10T22:34:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021
-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f: [truncated 5451 chars]
	I0810 22:34:20.446647    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291-m02
	I0810 22:34:20.446674    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:20.446682    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:20.446688    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:20.450862    4347 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0810 22:34:20.450884    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:20.450901    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:20.450905    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:20.450910    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:20.450914    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:20.450919    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:20 GMT
	I0810 22:34:20.451592    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291-m02","uid":"2bb9e107-eff6-4e4c-b421-18b111080a9d","resourceVersion":"555","creationTimestamp":"2021-08-10T22:34:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021
-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f: [truncated 5451 chars]
	I0810 22:34:20.946943    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291-m02
	I0810 22:34:20.946971    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:20.946977    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:20.946981    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:20.949838    4347 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:34:20.949861    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:20.949868    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:20.949872    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:20.949877    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:20.949881    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:20 GMT
	I0810 22:34:20.949886    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:20.950372    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291-m02","uid":"2bb9e107-eff6-4e4c-b421-18b111080a9d","resourceVersion":"555","creationTimestamp":"2021-08-10T22:34:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021
-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f: [truncated 5451 chars]
	I0810 22:34:21.446883    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291-m02
	I0810 22:34:21.446905    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:21.446912    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:21.446916    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:21.449404    4347 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:34:21.449423    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:21.449429    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:21.449434    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:21.449438    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:21.449443    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:21 GMT
	I0810 22:34:21.449462    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:21.450290    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291-m02","uid":"2bb9e107-eff6-4e4c-b421-18b111080a9d","resourceVersion":"555","creationTimestamp":"2021-08-10T22:34:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021
-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f: [truncated 5451 chars]
	I0810 22:34:21.947032    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291-m02
	I0810 22:34:21.947060    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:21.947066    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:21.947070    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:21.951136    4347 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0810 22:34:21.951159    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:21.951166    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:21.951170    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:21.951175    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:21.951186    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:21.951191    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:21 GMT
	I0810 22:34:21.951430    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291-m02","uid":"2bb9e107-eff6-4e4c-b421-18b111080a9d","resourceVersion":"555","creationTimestamp":"2021-08-10T22:34:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021
-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f: [truncated 5451 chars]
	I0810 22:34:21.951742    4347 node_ready.go:58] node "multinode-20210810223223-30291-m02" has status "Ready":"False"
	I0810 22:34:22.446092    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291-m02
	I0810 22:34:22.446122    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:22.446128    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:22.446133    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:22.448504    4347 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:34:22.448522    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:22.448527    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:22 GMT
	I0810 22:34:22.448531    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:22.448534    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:22.448536    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:22.448539    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:22.448707    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291-m02","uid":"2bb9e107-eff6-4e4c-b421-18b111080a9d","resourceVersion":"555","creationTimestamp":"2021-08-10T22:34:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021
-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f: [truncated 5451 chars]
	I0810 22:34:22.946267    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291-m02
	I0810 22:34:22.946293    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:22.946299    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:22.946304    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:22.949527    4347 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:34:22.949552    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:22.949559    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:22.949564    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:22.949569    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:22.949573    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:22.949606    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:22 GMT
	I0810 22:34:22.950687    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291-m02","uid":"2bb9e107-eff6-4e4c-b421-18b111080a9d","resourceVersion":"555","creationTimestamp":"2021-08-10T22:34:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021
-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f: [truncated 5451 chars]
	I0810 22:34:23.446606    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291-m02
	I0810 22:34:23.446694    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:23.446702    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:23.446706    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:23.449234    4347 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:34:23.449256    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:23.449262    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:23.449265    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:23.449268    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:23.449271    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:23.449274    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:23 GMT
	I0810 22:34:23.449411    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291-m02","uid":"2bb9e107-eff6-4e4c-b421-18b111080a9d","resourceVersion":"570","creationTimestamp":"2021-08-10T22:34:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5560 chars]
	I0810 22:34:23.946751    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291-m02
	I0810 22:34:23.946777    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:23.946784    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:23.946788    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:23.950162    4347 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:34:23.950185    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:23.950192    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:23.950197    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:23.950209    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:23.950214    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:23 GMT
	I0810 22:34:23.950218    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:23.950354    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291-m02","uid":"2bb9e107-eff6-4e4c-b421-18b111080a9d","resourceVersion":"570","creationTimestamp":"2021-08-10T22:34:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5560 chars]
	I0810 22:34:24.446334    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291-m02
	I0810 22:34:24.446361    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:24.446366    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:24.446371    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:24.449289    4347 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:34:24.449303    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:24.449308    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:24.449313    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:24.449318    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:24.449322    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:24.449326    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:24 GMT
	I0810 22:34:24.449672    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291-m02","uid":"2bb9e107-eff6-4e4c-b421-18b111080a9d","resourceVersion":"570","creationTimestamp":"2021-08-10T22:34:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5560 chars]
	I0810 22:34:24.449928    4347 node_ready.go:58] node "multinode-20210810223223-30291-m02" has status "Ready":"False"
	I0810 22:34:24.946329    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291-m02
	I0810 22:34:24.946353    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:24.946359    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:24.946363    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:24.949643    4347 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:34:24.949666    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:24.949671    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:24 GMT
	I0810 22:34:24.949675    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:24.949679    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:24.949684    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:24.949690    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:24.949838    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291-m02","uid":"2bb9e107-eff6-4e4c-b421-18b111080a9d","resourceVersion":"570","creationTimestamp":"2021-08-10T22:34:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5560 chars]
	I0810 22:34:25.446488    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291-m02
	I0810 22:34:25.446515    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:25.446529    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:25.446535    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:25.455238    4347 round_trippers.go:457] Response Status: 200 OK in 8 milliseconds
	I0810 22:34:25.455261    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:25.455266    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:25.455270    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:25.455274    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:25.455277    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:25 GMT
	I0810 22:34:25.455280    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:25.455407    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291-m02","uid":"2bb9e107-eff6-4e4c-b421-18b111080a9d","resourceVersion":"570","creationTimestamp":"2021-08-10T22:34:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5560 chars]
	I0810 22:34:25.946504    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291-m02
	I0810 22:34:25.946533    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:25.946541    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:25.946547    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:25.950465    4347 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:34:25.950492    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:25.950496    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:25.950500    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:25.950503    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:25.950505    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:25.950511    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:25 GMT
	I0810 22:34:25.950591    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291-m02","uid":"2bb9e107-eff6-4e4c-b421-18b111080a9d","resourceVersion":"570","creationTimestamp":"2021-08-10T22:34:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5560 chars]
	I0810 22:34:26.446726    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291-m02
	I0810 22:34:26.446753    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:26.446759    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:26.446764    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:26.456593    4347 round_trippers.go:457] Response Status: 200 OK in 9 milliseconds
	I0810 22:34:26.456621    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:26.456627    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:26.456633    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:26.456638    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:26.456642    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:26 GMT
	I0810 22:34:26.456647    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:26.456804    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291-m02","uid":"2bb9e107-eff6-4e4c-b421-18b111080a9d","resourceVersion":"570","creationTimestamp":"2021-08-10T22:34:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5560 chars]
	I0810 22:34:26.457129    4347 node_ready.go:58] node "multinode-20210810223223-30291-m02" has status "Ready":"False"
	I0810 22:34:26.946418    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291-m02
	I0810 22:34:26.946441    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:26.946447    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:26.946451    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:26.949663    4347 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:34:26.949679    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:26.949683    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:26.949686    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:26.949689    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:26 GMT
	I0810 22:34:26.949692    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:26.949695    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:26.949882    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291-m02","uid":"2bb9e107-eff6-4e4c-b421-18b111080a9d","resourceVersion":"570","creationTimestamp":"2021-08-10T22:34:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5560 chars]
	I0810 22:34:27.446543    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291-m02
	I0810 22:34:27.446570    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:27.446576    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:27.446580    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:27.449983    4347 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:34:27.450002    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:27.450008    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:27.450014    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:27.450019    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:27.450023    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:27.450027    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:27 GMT
	I0810 22:34:27.451521    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291-m02","uid":"2bb9e107-eff6-4e4c-b421-18b111080a9d","resourceVersion":"570","creationTimestamp":"2021-08-10T22:34:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5560 chars]
	I0810 22:34:27.946159    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291-m02
	I0810 22:34:27.946186    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:27.946192    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:27.946196    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:27.949443    4347 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:34:27.949465    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:27.949471    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:27.949476    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:27.949479    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:27.949482    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:27.949486    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:27 GMT
	I0810 22:34:27.949575    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291-m02","uid":"2bb9e107-eff6-4e4c-b421-18b111080a9d","resourceVersion":"570","creationTimestamp":"2021-08-10T22:34:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5560 chars]
	I0810 22:34:28.446142    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291-m02
	I0810 22:34:28.446169    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:28.446176    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:28.446180    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:28.450639    4347 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0810 22:34:28.450657    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:28.450663    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:28.450668    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:28.450672    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:28.450677    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:28.450682    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:28 GMT
	I0810 22:34:28.451657    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291-m02","uid":"2bb9e107-eff6-4e4c-b421-18b111080a9d","resourceVersion":"570","creationTimestamp":"2021-08-10T22:34:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5560 chars]
	I0810 22:34:28.946854    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291-m02
	I0810 22:34:28.946878    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:28.946885    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:28.946889    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:28.950372    4347 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:34:28.950384    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:28.950388    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:28.950392    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:28.950396    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:28 GMT
	I0810 22:34:28.950399    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:28.950402    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:28.950500    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291-m02","uid":"2bb9e107-eff6-4e4c-b421-18b111080a9d","resourceVersion":"570","creationTimestamp":"2021-08-10T22:34:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach" [truncated 5560 chars]
	I0810 22:34:28.950822    4347 node_ready.go:58] node "multinode-20210810223223-30291-m02" has status "Ready":"False"
	I0810 22:34:29.446578    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291-m02
	I0810 22:34:29.446602    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:29.446608    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:29.446612    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:29.450473    4347 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:34:29.450487    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:29.450493    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:29.450497    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:29.450502    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:29.450514    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:29.450518    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:29 GMT
	I0810 22:34:29.450703    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291-m02","uid":"2bb9e107-eff6-4e4c-b421-18b111080a9d","resourceVersion":"583","creationTimestamp":"2021-08-10T22:34:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{" [truncated 5733 chars]
	I0810 22:34:29.450984    4347 node_ready.go:49] node "multinode-20210810223223-30291-m02" has status "Ready":"True"
	I0810 22:34:29.451005    4347 node_ready.go:38] duration metric: took 9.509386037s waiting for node "multinode-20210810223223-30291-m02" to be "Ready" ...
	I0810 22:34:29.451017    4347 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0810 22:34:29.451103    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods
	I0810 22:34:29.451116    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:29.451123    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:29.451129    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:29.457673    4347 round_trippers.go:457] Response Status: 200 OK in 6 milliseconds
	I0810 22:34:29.457690    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:29.457696    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:29.457700    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:29.457704    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:29 GMT
	I0810 22:34:29.457708    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:29.457712    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:29.461022    4347 request.go:1123] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"584"},"items":[{"metadata":{"name":"coredns-558bd4d5db-v7x6p","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"0c4eb44b-9d97-4934-aa16-8b8625bf04cf","resourceVersion":"493","creationTimestamp":"2021-08-10T22:33:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"28236a2d-7d69-4771-a778-5fae1cd7d05f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:33:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"28236a2d-7d69-4771-a778-5fae1cd7d05f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k: [truncated 66681 chars]
	I0810 22:34:29.463278    4347 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-v7x6p" in "kube-system" namespace to be "Ready" ...
	I0810 22:34:29.463397    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-v7x6p
	I0810 22:34:29.463409    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:29.463416    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:29.463453    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:29.466000    4347 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:34:29.466016    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:29.466022    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:29.466026    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:29.466030    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:29.466035    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:29.466039    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:29 GMT
	I0810 22:34:29.466210    4347 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-v7x6p","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"0c4eb44b-9d97-4934-aa16-8b8625bf04cf","resourceVersion":"493","creationTimestamp":"2021-08-10T22:33:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"28236a2d-7d69-4771-a778-5fae1cd7d05f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:33:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"28236a2d-7d69-4771-a778-5fae1cd7d05f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5733 chars]
	I0810 22:34:29.466491    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291
	I0810 22:34:29.466502    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:29.466507    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:29.466512    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:29.468870    4347 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:34:29.468890    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:29.468896    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:29.468900    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:29.468905    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:29.468909    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:29.468914    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:29 GMT
	I0810 22:34:29.469187    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","resourceVersion":"406","creationTimestamp":"2021-08-10T22:33:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223223-30291","minikube.k8s.io/updated_at":"2021_08_10T22_33_20_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22 [truncated 6553 chars]
	I0810 22:34:29.469392    4347 pod_ready.go:92] pod "coredns-558bd4d5db-v7x6p" in "kube-system" namespace has status "Ready":"True"
	I0810 22:34:29.469404    4347 pod_ready.go:81] duration metric: took 6.097748ms waiting for pod "coredns-558bd4d5db-v7x6p" in "kube-system" namespace to be "Ready" ...
	I0810 22:34:29.469413    4347 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-20210810223223-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:34:29.469452    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210810223223-30291
	I0810 22:34:29.469460    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:29.469464    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:29.469468    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:29.471255    4347 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:34:29.471270    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:29.471276    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:29.471280    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:29.471285    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:29.471289    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:29 GMT
	I0810 22:34:29.471296    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:29.471528    4347 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210810223223-30291","namespace":"kube-system","uid":"8498143e-4386-44bc-9541-3193bd504c1d","resourceVersion":"489","creationTimestamp":"2021-08-10T22:33:25Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.50.32:2379","kubernetes.io/config.hash":"ee4e4232c1192224bf90edfa1030cde5","kubernetes.io/config.mirror":"ee4e4232c1192224bf90edfa1030cde5","kubernetes.io/config.seen":"2021-08-10T22:33:24.968043630Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:33:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash [truncated 5569 chars]
	I0810 22:34:29.471869    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291
	I0810 22:34:29.471891    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:29.471898    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:29.471903    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:29.473894    4347 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:34:29.473910    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:29.473915    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:29.473919    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:29.473922    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:29 GMT
	I0810 22:34:29.473925    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:29.473927    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:29.474170    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","resourceVersion":"406","creationTimestamp":"2021-08-10T22:33:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223223-30291","minikube.k8s.io/updated_at":"2021_08_10T22_33_20_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22 [truncated 6553 chars]
	I0810 22:34:29.474420    4347 pod_ready.go:92] pod "etcd-multinode-20210810223223-30291" in "kube-system" namespace has status "Ready":"True"
	I0810 22:34:29.474433    4347 pod_ready.go:81] duration metric: took 5.014362ms waiting for pod "etcd-multinode-20210810223223-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:34:29.474444    4347 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-20210810223223-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:34:29.474485    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20210810223223-30291
	I0810 22:34:29.474493    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:29.474497    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:29.474501    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:29.476940    4347 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:34:29.476953    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:29.476957    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:29.476961    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:29.476963    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:29.476967    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:29.476969    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:29 GMT
	I0810 22:34:29.477308    4347 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20210810223223-30291","namespace":"kube-system","uid":"1c83d52d-8a08-42be-9c8a-6420a1bdb75c","resourceVersion":"317","creationTimestamp":"2021-08-10T22:33:17Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.50.32:8443","kubernetes.io/config.hash":"9099813bef5425d688516ac434247f4d","kubernetes.io/config.mirror":"9099813bef5425d688516ac434247f4d","kubernetes.io/config.seen":"2021-08-10T22:33:07.454085484Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:33:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address [truncated 7249 chars]
	I0810 22:34:29.477637    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291
	I0810 22:34:29.477652    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:29.477658    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:29.477664    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:29.479658    4347 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:34:29.479670    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:29.479680    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:29.479687    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:29.479691    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:29.479695    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:29.479698    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:29 GMT
	I0810 22:34:29.480212    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","resourceVersion":"406","creationTimestamp":"2021-08-10T22:33:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223223-30291","minikube.k8s.io/updated_at":"2021_08_10T22_33_20_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22 [truncated 6553 chars]
	I0810 22:34:29.480444    4347 pod_ready.go:92] pod "kube-apiserver-multinode-20210810223223-30291" in "kube-system" namespace has status "Ready":"True"
	I0810 22:34:29.480457    4347 pod_ready.go:81] duration metric: took 6.006922ms waiting for pod "kube-apiserver-multinode-20210810223223-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:34:29.480468    4347 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-20210810223223-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:34:29.480512    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210810223223-30291
	I0810 22:34:29.480524    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:29.480533    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:29.480544    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:29.483629    4347 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:34:29.483645    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:29.483650    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:29.483655    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:29.483660    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:29.483664    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:29.483668    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:29 GMT
	I0810 22:34:29.484145    4347 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210810223223-30291","namespace":"kube-system","uid":"9305e895-2f70-44a4-8319-6f50b7e7a0ce","resourceVersion":"456","creationTimestamp":"2021-08-10T22:33:25Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"77761625d867cf54e5130d9def04b55c","kubernetes.io/config.mirror":"77761625d867cf54e5130d9def04b55c","kubernetes.io/config.seen":"2021-08-10T22:33:24.968061293Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:33:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi
g.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.s [truncated 6810 chars]
	I0810 22:34:29.484456    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291
	I0810 22:34:29.484472    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:29.484477    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:29.484481    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:29.486294    4347 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0810 22:34:29.486307    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:29.486313    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:29.486318    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:29.486322    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:29.486327    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:29.486332    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:29 GMT
	I0810 22:34:29.486566    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","resourceVersion":"406","creationTimestamp":"2021-08-10T22:33:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223223-30291","minikube.k8s.io/updated_at":"2021_08_10T22_33_20_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22 [truncated 6553 chars]
	I0810 22:34:29.486830    4347 pod_ready.go:92] pod "kube-controller-manager-multinode-20210810223223-30291" in "kube-system" namespace has status "Ready":"True"
	I0810 22:34:29.486845    4347 pod_ready.go:81] duration metric: took 6.367836ms waiting for pod "kube-controller-manager-multinode-20210810223223-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:34:29.486853    4347 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6t6mb" in "kube-system" namespace to be "Ready" ...
	I0810 22:34:29.647221    4347 request.go:600] Waited for 160.30517ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6t6mb
	I0810 22:34:29.647293    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6t6mb
	I0810 22:34:29.647309    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:29.647317    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:29.647324    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:29.650249    4347 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:34:29.650265    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:29.650272    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:29.650276    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:29.650281    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:29 GMT
	I0810 22:34:29.650287    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:29.650291    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:29.650445    4347 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6t6mb","generateName":"kube-proxy-","namespace":"kube-system","uid":"22159811-3bd2-4e80-94b1-f3bef037909c","resourceVersion":"565","creationTimestamp":"2021-08-10T22:34:19Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"3eb0224f-214a-4d5e-ba63-b7b722448d21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3eb0224f-214a-4d5e-ba63-b7b722448d21\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller
":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5770 chars]
	I0810 22:34:29.847176    4347 request.go:600] Waited for 196.356548ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291-m02
	I0810 22:34:29.847236    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291-m02
	I0810 22:34:29.847241    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:29.847247    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:29.847251    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:29.851548    4347 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0810 22:34:29.851564    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:29.851569    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:29.851574    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:29.851578    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:29.851582    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:29.851587    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:29 GMT
	I0810 22:34:29.851741    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291-m02","uid":"2bb9e107-eff6-4e4c-b421-18b111080a9d","resourceVersion":"583","creationTimestamp":"2021-08-10T22:34:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:34:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metada
ta":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{" [truncated 5733 chars]
	I0810 22:34:29.851998    4347 pod_ready.go:92] pod "kube-proxy-6t6mb" in "kube-system" namespace has status "Ready":"True"
	I0810 22:34:29.852014    4347 pod_ready.go:81] duration metric: took 365.151653ms waiting for pod "kube-proxy-6t6mb" in "kube-system" namespace to be "Ready" ...
	I0810 22:34:29.852026    4347 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lmhw9" in "kube-system" namespace to be "Ready" ...
	I0810 22:34:30.047441    4347 request.go:600] Waited for 195.34571ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lmhw9
	I0810 22:34:30.047512    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lmhw9
	I0810 22:34:30.047517    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:30.047522    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:30.047526    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:30.051037    4347 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:34:30.051058    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:30.051066    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:30.051074    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:30.051078    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:30.051083    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:30.051087    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:30 GMT
	I0810 22:34:30.051362    4347 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lmhw9","generateName":"kube-proxy-","namespace":"kube-system","uid":"2a10306d-93c9-4aac-b47a-8bd1d406882c","resourceVersion":"470","creationTimestamp":"2021-08-10T22:33:33Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"3eb0224f-214a-4d5e-ba63-b7b722448d21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:33:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3eb0224f-214a-4d5e-ba63-b7b722448d21\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller
":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5758 chars]
	I0810 22:34:30.247067    4347 request.go:600] Waited for 195.352918ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291
	I0810 22:34:30.247139    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291
	I0810 22:34:30.247146    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:30.247154    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:30.247159    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:30.250034    4347 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0810 22:34:30.250054    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:30.250059    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:30 GMT
	I0810 22:34:30.250062    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:30.250065    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:30.250068    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:30.250071    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:30.250329    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","resourceVersion":"406","creationTimestamp":"2021-08-10T22:33:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223223-30291","minikube.k8s.io/updated_at":"2021_08_10T22_33_20_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22 [truncated 6553 chars]
	I0810 22:34:30.250647    4347 pod_ready.go:92] pod "kube-proxy-lmhw9" in "kube-system" namespace has status "Ready":"True"
	I0810 22:34:30.250658    4347 pod_ready.go:81] duration metric: took 398.625584ms waiting for pod "kube-proxy-lmhw9" in "kube-system" namespace to be "Ready" ...
	I0810 22:34:30.250667    4347 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-20210810223223-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:34:30.447129    4347 request.go:600] Waited for 196.375462ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210810223223-30291
	I0810 22:34:30.447205    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210810223223-30291
	I0810 22:34:30.447213    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:30.447228    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:30.447241    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:30.450665    4347 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:34:30.450685    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:30.450690    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:30.450693    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:30.450696    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:30 GMT
	I0810 22:34:30.450699    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:30.450703    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:30.451886    4347 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20210810223223-30291","namespace":"kube-system","uid":"5a7e6aa0-3e54-4877-a2b0-79df1e84d9f7","resourceVersion":"295","creationTimestamp":"2021-08-10T22:33:25Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"18b06801ebf2048d768b73e098da8a40","kubernetes.io/config.mirror":"18b06801ebf2048d768b73e098da8a40","kubernetes.io/config.seen":"2021-08-10T22:33:24.968063579Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-10T22:33:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:ku
bernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labe [truncated 4540 chars]
	I0810 22:34:30.647608    4347 request.go:600] Waited for 195.351688ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291
	I0810 22:34:30.647669    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes/multinode-20210810223223-30291
	I0810 22:34:30.647676    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:30.647683    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:30.647691    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:30.696789    4347 round_trippers.go:457] Response Status: 200 OK in 49 milliseconds
	I0810 22:34:30.696818    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:30.696825    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:30.696828    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:30 GMT
	I0810 22:34:30.696832    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:30.696835    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:30.696838    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:30.697222    4347 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","resourceVersion":"406","creationTimestamp":"2021-08-10T22:33:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223223-30291","minikube.k8s.io/updated_at":"2021_08_10T22_33_20_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager"
:"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-10T22 [truncated 6553 chars]
	I0810 22:34:30.697665    4347 pod_ready.go:92] pod "kube-scheduler-multinode-20210810223223-30291" in "kube-system" namespace has status "Ready":"True"
	I0810 22:34:30.697686    4347 pod_ready.go:81] duration metric: took 447.011787ms waiting for pod "kube-scheduler-multinode-20210810223223-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:34:30.697702    4347 pod_ready.go:38] duration metric: took 1.246665582s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0810 22:34:30.697735    4347 system_svc.go:44] waiting for kubelet service to be running ....
	I0810 22:34:30.697793    4347 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0810 22:34:30.709069    4347 system_svc.go:56] duration metric: took 11.32472ms WaitForService to wait for kubelet.
	I0810 22:34:30.709095    4347 kubeadm.go:547] duration metric: took 10.78256952s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0810 22:34:30.709123    4347 node_conditions.go:102] verifying NodePressure condition ...
	I0810 22:34:30.847492    4347 request.go:600] Waited for 138.293842ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.50.32:8443/api/v1/nodes
	I0810 22:34:30.847563    4347 round_trippers.go:432] GET https://192.168.50.32:8443/api/v1/nodes
	I0810 22:34:30.847598    4347 round_trippers.go:438] Request Headers:
	I0810 22:34:30.847615    4347 round_trippers.go:442]     Accept: application/json, */*
	I0810 22:34:30.847626    4347 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0810 22:34:30.850811    4347 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0810 22:34:30.850837    4347 round_trippers.go:460] Response Headers:
	I0810 22:34:30.850845    4347 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: f7206f91-4d74-4030-a8f6-461e9922fe14
	I0810 22:34:30.850851    4347 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 9a6fa272-7ba3-490c-9a82-bf5b9ed6e538
	I0810 22:34:30.850855    4347 round_trippers.go:463]     Date: Tue, 10 Aug 2021 22:34:30 GMT
	I0810 22:34:30.850859    4347 round_trippers.go:463]     Cache-Control: no-cache, private
	I0810 22:34:30.850865    4347 round_trippers.go:463]     Content-Type: application/json
	I0810 22:34:30.851288    4347 request.go:1123] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"585"},"items":[{"metadata":{"name":"multinode-20210810223223-30291","uid":"d71d0c6c-2cb3-4f14-a2fd-ba842cf0c5ee","resourceVersion":"406","creationTimestamp":"2021-08-10T22:33:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210810223223-30291","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210810223223-30291","minikube.k8s.io/updated_at":"2021_08_10T22_33_20_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed
-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operatio [truncated 13331 chars]
	I0810 22:34:30.851857    4347 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0810 22:34:30.851878    4347 node_conditions.go:123] node cpu capacity is 2
	I0810 22:34:30.851895    4347 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0810 22:34:30.851902    4347 node_conditions.go:123] node cpu capacity is 2
	I0810 22:34:30.851908    4347 node_conditions.go:105] duration metric: took 142.779874ms to run NodePressure ...
	I0810 22:34:30.851926    4347 start.go:231] waiting for startup goroutines ...
	I0810 22:34:30.894335    4347 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0810 22:34:30.896622    4347 out.go:177] * Done! kubectl is now configured to use "multinode-20210810223223-30291" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Tue 2021-08-10 22:32:34 UTC, end at Tue 2021-08-10 22:38:44 UTC. --
	Aug 10 22:38:44 multinode-20210810223223-30291 crio[2074]: time="2021-08-10 22:38:44.205755734Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=bb5980e5-192c-4256-ad5b-6fb6b668d9db name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:38:44 multinode-20210810223223-30291 crio[2074]: time="2021-08-10 22:38:44.205959958Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0546f83cfad21f66789e8c6c45beb48deb7e73bfb0f09ffb76be32add922c472,PodSandboxId:77754eaffbc8999d0eee55167b99849548aeceee68e42223821dde6df9b71f5f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,State:CONTAINER_RUNNING,CreatedAt:1628634877804702101,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-84b6686758-5h7gq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d11e1a39-be6d-4d16-9086-b6cfef5e1644,},Annotations:map[string]string{io.kubernetes.container.hash: c6d8bc46,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f90af0c728851e601b452c99af417f5bfcd580b5355c6e8bb56fa2c701aacb7,PodSandboxId:f2a2f0197075a11d02804fe88085db4f127993180baa17934f4f244fbc5c7a8c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1,State:CONTAINER_RUNNING,CreatedAt:1628634817927932495,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2bvdc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c26b9021-1d86-475c-ac98-6f7e7e07c434,},Annotations:map[string]string{io.kubernetes.container.hash: 4970c570,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfebebba90bf30cb1d511f33375a3e9b6bd91ddc07cabfe5b5589a14fe35685c,PodSandboxId:552c60b72eff7ec71d413674f557175d18290855b2fbb557210d2ac15208a685,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628634817667541294,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af946d1d-fa19-47fa-8c83-fd1d06a0e788,},Annotations:map[string]string{io.kubernetes.container.hash: 31e4f950,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea6ca1aa48dc2cf026c5ed8947bfb8ef90ac7ac3d0f88bee1bc2b22511e83b4,PodSandboxId:fbff0e39503c70d012b363b8fe670ce8a9f3577e7e15e998600a2fa8b6670222,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628634816892860051,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-v7x6p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c4eb44b-9d97-4934-aa16-8b8625bf04cf,},Annotations:map[string]string{io.kubernetes.container.hash: e2b5faf8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dn
s-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:450a88f25c78b26558e59b4e708a2af565fa95ffdc04a9cb0e91b46224c6573d,PodSandboxId:2b47855215e9f9a81ed9a31f5e73478cd23463ae8391753d3b15909daf03e2bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628634815170944830,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lmhw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a10306d-93c9-4aac-b47a-8bd1d
406882c,},Annotations:map[string]string{io.kubernetes.container.hash: a3c465e1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b071c54f171f451e6e33cdbfbddab63c53133700bfe5dbff374f9d207f00facf,PodSandboxId:8370bc0b5b9a970d4007204b3b67a7266d452619c4263328f4a8403fa748ea90,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628634790862838159,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18b06801ebf2048d768b73e098da
8a40,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d8b11c78d38735ea0d95a1ef9a0812af3d2ecb679a807514125823d90f0f456,PodSandboxId:76613b5ae865f77eb677f7fc6c1d96c89c385010f19742b2a2daaf3b3d536da2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628634790776364691,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 77761625d867cf54e5130d9def04b55c,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b22cef1088cdfa5b9c5d2f6974fa60c0edf2132c0881c9c61984c16c2864f97,PodSandboxId:2d4522aed7a375e49e9a8b17d6da385d86a65fcbac37cb6f75c688d6842598cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628634790563156773,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
099813bef5425d688516ac434247f4d,},Annotations:map[string]string{io.kubernetes.container.hash: 96b9c909,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d359668a0c2391fdfb2cd9dcf67ffaf1c30ce8549c23d047c95ef4a75f2ada,PodSandboxId:42a0fc97ea8eb49918d4f19e7d3b3837f13d12aa3c1b7f8c191d955c9ae417e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628634790483989854,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee4e4232c1192224bf90edfa1030cde5,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 755f249c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=bb5980e5-192c-4256-ad5b-6fb6b668d9db name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:38:44 multinode-20210810223223-30291 crio[2074]: time="2021-08-10 22:38:44.244769157Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ed706375-132e-4364-b9b9-8c68366cd333 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:38:44 multinode-20210810223223-30291 crio[2074]: time="2021-08-10 22:38:44.244920836Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ed706375-132e-4364-b9b9-8c68366cd333 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:38:44 multinode-20210810223223-30291 crio[2074]: time="2021-08-10 22:38:44.245124593Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0546f83cfad21f66789e8c6c45beb48deb7e73bfb0f09ffb76be32add922c472,PodSandboxId:77754eaffbc8999d0eee55167b99849548aeceee68e42223821dde6df9b71f5f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,State:CONTAINER_RUNNING,CreatedAt:1628634877804702101,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-84b6686758-5h7gq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d11e1a39-be6d-4d16-9086-b6cfef5e1644,},Annotations:map[string]string{io.kubernetes.container.hash: c6d8bc46,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f90af0c728851e601b452c99af417f5bfcd580b5355c6e8bb56fa2c701aacb7,PodSandboxId:f2a2f0197075a11d02804fe88085db4f127993180baa17934f4f244fbc5c7a8c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1,State:CONTAINER_RUNNING,CreatedAt:1628634817927932495,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2bvdc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c26b9021-1d86-475c-ac98-6f7e7e07c434,},Annotations:map[string]string{io.kubernetes.container.hash: 4970c570,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfebebba90bf30cb1d511f33375a3e9b6bd91ddc07cabfe5b5589a14fe35685c,PodSandboxId:552c60b72eff7ec71d413674f557175d18290855b2fbb557210d2ac15208a685,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628634817667541294,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af946d1d-fa19-47fa-8c83-fd1d06a0e788,},Annotations:map[string]string{io.kubernetes.container.hash: 31e4f950,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea6ca1aa48dc2cf026c5ed8947bfb8ef90ac7ac3d0f88bee1bc2b22511e83b4,PodSandboxId:fbff0e39503c70d012b363b8fe670ce8a9f3577e7e15e998600a2fa8b6670222,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628634816892860051,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-v7x6p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c4eb44b-9d97-4934-aa16-8b8625bf04cf,},Annotations:map[string]string{io.kubernetes.container.hash: e2b5faf8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dn
s-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:450a88f25c78b26558e59b4e708a2af565fa95ffdc04a9cb0e91b46224c6573d,PodSandboxId:2b47855215e9f9a81ed9a31f5e73478cd23463ae8391753d3b15909daf03e2bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628634815170944830,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lmhw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a10306d-93c9-4aac-b47a-8bd1d
406882c,},Annotations:map[string]string{io.kubernetes.container.hash: a3c465e1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b071c54f171f451e6e33cdbfbddab63c53133700bfe5dbff374f9d207f00facf,PodSandboxId:8370bc0b5b9a970d4007204b3b67a7266d452619c4263328f4a8403fa748ea90,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628634790862838159,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18b06801ebf2048d768b73e098da
8a40,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d8b11c78d38735ea0d95a1ef9a0812af3d2ecb679a807514125823d90f0f456,PodSandboxId:76613b5ae865f77eb677f7fc6c1d96c89c385010f19742b2a2daaf3b3d536da2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628634790776364691,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 77761625d867cf54e5130d9def04b55c,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b22cef1088cdfa5b9c5d2f6974fa60c0edf2132c0881c9c61984c16c2864f97,PodSandboxId:2d4522aed7a375e49e9a8b17d6da385d86a65fcbac37cb6f75c688d6842598cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628634790563156773,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
099813bef5425d688516ac434247f4d,},Annotations:map[string]string{io.kubernetes.container.hash: 96b9c909,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d359668a0c2391fdfb2cd9dcf67ffaf1c30ce8549c23d047c95ef4a75f2ada,PodSandboxId:42a0fc97ea8eb49918d4f19e7d3b3837f13d12aa3c1b7f8c191d955c9ae417e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628634790483989854,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee4e4232c1192224bf90edfa1030cde5,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 755f249c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ed706375-132e-4364-b9b9-8c68366cd333 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:38:44 multinode-20210810223223-30291 crio[2074]: time="2021-08-10 22:38:44.291685995Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=fe957e33-4236-44d9-a6ee-724ab98d5eb6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:38:44 multinode-20210810223223-30291 crio[2074]: time="2021-08-10 22:38:44.291824554Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=fe957e33-4236-44d9-a6ee-724ab98d5eb6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:38:44 multinode-20210810223223-30291 crio[2074]: time="2021-08-10 22:38:44.292011478Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0546f83cfad21f66789e8c6c45beb48deb7e73bfb0f09ffb76be32add922c472,PodSandboxId:77754eaffbc8999d0eee55167b99849548aeceee68e42223821dde6df9b71f5f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,State:CONTAINER_RUNNING,CreatedAt:1628634877804702101,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-84b6686758-5h7gq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d11e1a39-be6d-4d16-9086-b6cfef5e1644,},Annotations:map[string]string{io.kubernetes.container.hash: c6d8bc46,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f90af0c728851e601b452c99af417f5bfcd580b5355c6e8bb56fa2c701aacb7,PodSandboxId:f2a2f0197075a11d02804fe88085db4f127993180baa17934f4f244fbc5c7a8c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1,State:CONTAINER_RUNNING,CreatedAt:1628634817927932495,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2bvdc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c26b9021-1d86-475c-ac98-6f7e7e07c434,},Annotations:map[string]string{io.kubernetes.container.hash: 4970c570,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfebebba90bf30cb1d511f33375a3e9b6bd91ddc07cabfe5b5589a14fe35685c,PodSandboxId:552c60b72eff7ec71d413674f557175d18290855b2fbb557210d2ac15208a685,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628634817667541294,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af946d1d-fa19-47fa-8c83-fd1d06a0e788,},Annotations:map[string]string{io.kubernetes.container.hash: 31e4f950,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea6ca1aa48dc2cf026c5ed8947bfb8ef90ac7ac3d0f88bee1bc2b22511e83b4,PodSandboxId:fbff0e39503c70d012b363b8fe670ce8a9f3577e7e15e998600a2fa8b6670222,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628634816892860051,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-v7x6p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c4eb44b-9d97-4934-aa16-8b8625bf04cf,},Annotations:map[string]string{io.kubernetes.container.hash: e2b5faf8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dn
s-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:450a88f25c78b26558e59b4e708a2af565fa95ffdc04a9cb0e91b46224c6573d,PodSandboxId:2b47855215e9f9a81ed9a31f5e73478cd23463ae8391753d3b15909daf03e2bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628634815170944830,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lmhw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a10306d-93c9-4aac-b47a-8bd1d
406882c,},Annotations:map[string]string{io.kubernetes.container.hash: a3c465e1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b071c54f171f451e6e33cdbfbddab63c53133700bfe5dbff374f9d207f00facf,PodSandboxId:8370bc0b5b9a970d4007204b3b67a7266d452619c4263328f4a8403fa748ea90,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628634790862838159,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18b06801ebf2048d768b73e098da
8a40,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d8b11c78d38735ea0d95a1ef9a0812af3d2ecb679a807514125823d90f0f456,PodSandboxId:76613b5ae865f77eb677f7fc6c1d96c89c385010f19742b2a2daaf3b3d536da2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628634790776364691,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 77761625d867cf54e5130d9def04b55c,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b22cef1088cdfa5b9c5d2f6974fa60c0edf2132c0881c9c61984c16c2864f97,PodSandboxId:2d4522aed7a375e49e9a8b17d6da385d86a65fcbac37cb6f75c688d6842598cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628634790563156773,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
099813bef5425d688516ac434247f4d,},Annotations:map[string]string{io.kubernetes.container.hash: 96b9c909,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d359668a0c2391fdfb2cd9dcf67ffaf1c30ce8549c23d047c95ef4a75f2ada,PodSandboxId:42a0fc97ea8eb49918d4f19e7d3b3837f13d12aa3c1b7f8c191d955c9ae417e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628634790483989854,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee4e4232c1192224bf90edfa1030cde5,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 755f249c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=fe957e33-4236-44d9-a6ee-724ab98d5eb6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:38:44 multinode-20210810223223-30291 crio[2074]: time="2021-08-10 22:38:44.347925653Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d53ae108-d887-42f8-892f-099df58576b8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:38:44 multinode-20210810223223-30291 crio[2074]: time="2021-08-10 22:38:44.348078651Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d53ae108-d887-42f8-892f-099df58576b8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:38:44 multinode-20210810223223-30291 crio[2074]: time="2021-08-10 22:38:44.348248992Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0546f83cfad21f66789e8c6c45beb48deb7e73bfb0f09ffb76be32add922c472,PodSandboxId:77754eaffbc8999d0eee55167b99849548aeceee68e42223821dde6df9b71f5f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,State:CONTAINER_RUNNING,CreatedAt:1628634877804702101,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-84b6686758-5h7gq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d11e1a39-be6d-4d16-9086-b6cfef5e1644,},Annotations:map[string]string{io.kubernetes.container.hash: c6d8bc46,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f90af0c728851e601b452c99af417f5bfcd580b5355c6e8bb56fa2c701aacb7,PodSandboxId:f2a2f0197075a11d02804fe88085db4f127993180baa17934f4f244fbc5c7a8c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1,State:CONTAINER_RUNNING,CreatedAt:1628634817927932495,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2bvdc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c26b9021-1d86-475c-ac98-6f7e7e07c434,},Annotations:map[string]string{io.kubernetes.container.hash: 4970c570,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfebebba90bf30cb1d511f33375a3e9b6bd91ddc07cabfe5b5589a14fe35685c,PodSandboxId:552c60b72eff7ec71d413674f557175d18290855b2fbb557210d2ac15208a685,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628634817667541294,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af946d1d-fa19-47fa-8c83-fd1d06a0e788,},Annotations:map[string]string{io.kubernetes.container.hash: 31e4f950,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea6ca1aa48dc2cf026c5ed8947bfb8ef90ac7ac3d0f88bee1bc2b22511e83b4,PodSandboxId:fbff0e39503c70d012b363b8fe670ce8a9f3577e7e15e998600a2fa8b6670222,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628634816892860051,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-v7x6p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c4eb44b-9d97-4934-aa16-8b8625bf04cf,},Annotations:map[string]string{io.kubernetes.container.hash: e2b5faf8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dn
s-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:450a88f25c78b26558e59b4e708a2af565fa95ffdc04a9cb0e91b46224c6573d,PodSandboxId:2b47855215e9f9a81ed9a31f5e73478cd23463ae8391753d3b15909daf03e2bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628634815170944830,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lmhw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a10306d-93c9-4aac-b47a-8bd1d
406882c,},Annotations:map[string]string{io.kubernetes.container.hash: a3c465e1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b071c54f171f451e6e33cdbfbddab63c53133700bfe5dbff374f9d207f00facf,PodSandboxId:8370bc0b5b9a970d4007204b3b67a7266d452619c4263328f4a8403fa748ea90,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628634790862838159,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18b06801ebf2048d768b73e098da
8a40,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d8b11c78d38735ea0d95a1ef9a0812af3d2ecb679a807514125823d90f0f456,PodSandboxId:76613b5ae865f77eb677f7fc6c1d96c89c385010f19742b2a2daaf3b3d536da2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628634790776364691,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 77761625d867cf54e5130d9def04b55c,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b22cef1088cdfa5b9c5d2f6974fa60c0edf2132c0881c9c61984c16c2864f97,PodSandboxId:2d4522aed7a375e49e9a8b17d6da385d86a65fcbac37cb6f75c688d6842598cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628634790563156773,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
099813bef5425d688516ac434247f4d,},Annotations:map[string]string{io.kubernetes.container.hash: 96b9c909,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d359668a0c2391fdfb2cd9dcf67ffaf1c30ce8549c23d047c95ef4a75f2ada,PodSandboxId:42a0fc97ea8eb49918d4f19e7d3b3837f13d12aa3c1b7f8c191d955c9ae417e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628634790483989854,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee4e4232c1192224bf90edfa1030cde5,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 755f249c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d53ae108-d887-42f8-892f-099df58576b8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:38:44 multinode-20210810223223-30291 crio[2074]: time="2021-08-10 22:38:44.376831205Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=b85e4c1d-f0c4-4ffb-80c8-9961135cdbbf name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Aug 10 22:38:44 multinode-20210810223223-30291 crio[2074]: time="2021-08-10 22:38:44.378190246Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:77754eaffbc8999d0eee55167b99849548aeceee68e42223821dde6df9b71f5f,Metadata:&PodSandboxMetadata{Name:busybox-84b6686758-5h7gq,Uid:d11e1a39-be6d-4d16-9086-b6cfef5e1644,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1628634873698805971,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-84b6686758-5h7gq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d11e1a39-be6d-4d16-9086-b6cfef5e1644,pod-template-hash: 84b6686758,},Annotations:map[string]string{kubernetes.io/config.seen: 2021-08-10T22:34:31.844571684Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:552c60b72eff7ec71d413674f557175d18290855b2fbb557210d2ac15208a685,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:af946d1d-fa19-47fa-8c83-fd1d06a0e788,Namespace:kube-system
,Attempt:0,},State:SANDBOX_READY,CreatedAt:1628634816888082643,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af946d1d-fa19-47fa-8c83-fd1d06a0e788,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\
":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2021-08-10T22:33:36.208400838Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fbff0e39503c70d012b363b8fe670ce8a9f3577e7e15e998600a2fa8b6670222,Metadata:&PodSandboxMetadata{Name:coredns-558bd4d5db-v7x6p,Uid:0c4eb44b-9d97-4934-aa16-8b8625bf04cf,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1628634815923185302,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-558bd4d5db-v7x6p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c4eb44b-9d97-4934-aa16-8b8625bf04cf,k8s-app: kube-dns,pod-template-hash: 558bd4d5db,},Annotations:map[string]string{kubernetes.io/config.seen: 2021-08-10T22:33:34.035055859Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2b47855215e9f9a81ed9a31f5e73478cd23463ae8391753d3b15909daf03e2bf,Metadata:&PodSandboxMetadata{Name:kube-proxy-lmhw9,Uid:2a10306d-93c9-4aac-b47a-8bd1d406882c,Namespace
:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1628634814301230961,Labels:map[string]string{controller-revision-hash: 7cdcb64568,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-lmhw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a10306d-93c9-4aac-b47a-8bd1d406882c,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2021-08-10T22:33:33.876387957Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f2a2f0197075a11d02804fe88085db4f127993180baa17934f4f244fbc5c7a8c,Metadata:&PodSandboxMetadata{Name:kindnet-2bvdc,Uid:c26b9021-1d86-475c-ac98-6f7e7e07c434,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1628634814261575992,Labels:map[string]string{app: kindnet,controller-revision-hash: 694b6fb659,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-2bvdc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c26b9021-1d86-475c-ac98-6f7e7e07c434,k8s-app: kindnet,pod
-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2021-08-10T22:33:33.859000126Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:42a0fc97ea8eb49918d4f19e7d3b3837f13d12aa3c1b7f8c191d955c9ae417e3,Metadata:&PodSandboxMetadata{Name:etcd-multinode-20210810223223-30291,Uid:ee4e4232c1192224bf90edfa1030cde5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1628634789523165030,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee4e4232c1192224bf90edfa1030cde5,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.32:2379,kubernetes.io/config.hash: ee4e4232c1192224bf90edfa1030cde5,kubernetes.io/config.seen: 2021-08-10T22:33:07.454065362Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2d4522aed7a375e49e9a8b17d6d
a385d86a65fcbac37cb6f75c688d6842598cd,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-20210810223223-30291,Uid:9099813bef5425d688516ac434247f4d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1628634789519001825,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9099813bef5425d688516ac434247f4d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.32:8443,kubernetes.io/config.hash: 9099813bef5425d688516ac434247f4d,kubernetes.io/config.seen: 2021-08-10T22:33:07.454085484Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8370bc0b5b9a970d4007204b3b67a7266d452619c4263328f4a8403fa748ea90,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-20210810223223-30291,Uid:18b06801ebf2048d768b73e098da8a40,Namespace:kube-system,Attemp
t:0,},State:SANDBOX_READY,CreatedAt:1628634789502775214,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18b06801ebf2048d768b73e098da8a40,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 18b06801ebf2048d768b73e098da8a40,kubernetes.io/config.seen: 2021-08-10T22:33:07.454089904Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:76613b5ae865f77eb677f7fc6c1d96c89c385010f19742b2a2daaf3b3d536da2,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-20210810223223-30291,Uid:77761625d867cf54e5130d9def04b55c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1628634789482979257,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-20210810223223-30291,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 77761625d867cf54e5130d9def04b55c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 77761625d867cf54e5130d9def04b55c,kubernetes.io/config.seen: 2021-08-10T22:33:07.454087868Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=b85e4c1d-f0c4-4ffb-80c8-9961135cdbbf name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Aug 10 22:38:44 multinode-20210810223223-30291 crio[2074]: time="2021-08-10 22:38:44.379345905Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8415c855-b319-4c4e-b061-13c8423aba58 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:38:44 multinode-20210810223223-30291 crio[2074]: time="2021-08-10 22:38:44.379395773Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8415c855-b319-4c4e-b061-13c8423aba58 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:38:44 multinode-20210810223223-30291 crio[2074]: time="2021-08-10 22:38:44.379639273Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0546f83cfad21f66789e8c6c45beb48deb7e73bfb0f09ffb76be32add922c472,PodSandboxId:77754eaffbc8999d0eee55167b99849548aeceee68e42223821dde6df9b71f5f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,State:CONTAINER_RUNNING,CreatedAt:1628634877804702101,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-84b6686758-5h7gq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d11e1a39-be6d-4d16-9086-b6cfef5e1644,},Annotations:map[string]string{io.kubernetes.container.hash: c6d8bc46,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f90af0c728851e601b452c99af417f5bfcd580b5355c6e8bb56fa2c701aacb7,PodSandboxId:f2a2f0197075a11d02804fe88085db4f127993180baa17934f4f244fbc5c7a8c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1,State:CONTAINER_RUNNING,CreatedAt:1628634817927932495,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2bvdc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c26b9021-1d86-475c-ac98-6f7e7e07c434,},Annotations:map[string]string{io.kubernetes.container.hash: 4970c570,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfebebba90bf30cb1d511f33375a3e9b6bd91ddc07cabfe5b5589a14fe35685c,PodSandboxId:552c60b72eff7ec71d413674f557175d18290855b2fbb557210d2ac15208a685,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628634817667541294,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af946d1d-fa19-47fa-8c83-fd1d06a0e788,},Annotations:map[string]string{io.kubernetes.container.hash: 31e4f950,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea6ca1aa48dc2cf026c5ed8947bfb8ef90ac7ac3d0f88bee1bc2b22511e83b4,PodSandboxId:fbff0e39503c70d012b363b8fe670ce8a9f3577e7e15e998600a2fa8b6670222,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628634816892860051,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-v7x6p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c4eb44b-9d97-4934-aa16-8b8625bf04cf,},Annotations:map[string]string{io.kubernetes.container.hash: e2b5faf8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dn
s-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:450a88f25c78b26558e59b4e708a2af565fa95ffdc04a9cb0e91b46224c6573d,PodSandboxId:2b47855215e9f9a81ed9a31f5e73478cd23463ae8391753d3b15909daf03e2bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628634815170944830,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lmhw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a10306d-93c9-4aac-b47a-8bd1d
406882c,},Annotations:map[string]string{io.kubernetes.container.hash: a3c465e1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b071c54f171f451e6e33cdbfbddab63c53133700bfe5dbff374f9d207f00facf,PodSandboxId:8370bc0b5b9a970d4007204b3b67a7266d452619c4263328f4a8403fa748ea90,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628634790862838159,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18b06801ebf2048d768b73e098da
8a40,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d8b11c78d38735ea0d95a1ef9a0812af3d2ecb679a807514125823d90f0f456,PodSandboxId:76613b5ae865f77eb677f7fc6c1d96c89c385010f19742b2a2daaf3b3d536da2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628634790776364691,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 77761625d867cf54e5130d9def04b55c,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b22cef1088cdfa5b9c5d2f6974fa60c0edf2132c0881c9c61984c16c2864f97,PodSandboxId:2d4522aed7a375e49e9a8b17d6da385d86a65fcbac37cb6f75c688d6842598cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628634790563156773,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
099813bef5425d688516ac434247f4d,},Annotations:map[string]string{io.kubernetes.container.hash: 96b9c909,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d359668a0c2391fdfb2cd9dcf67ffaf1c30ce8549c23d047c95ef4a75f2ada,PodSandboxId:42a0fc97ea8eb49918d4f19e7d3b3837f13d12aa3c1b7f8c191d955c9ae417e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628634790483989854,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee4e4232c1192224bf90edfa1030cde5,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 755f249c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8415c855-b319-4c4e-b061-13c8423aba58 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:38:44 multinode-20210810223223-30291 crio[2074]: time="2021-08-10 22:38:44.389166655Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a88efec0-b4bd-4612-802e-07d85389a5ba name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:38:44 multinode-20210810223223-30291 crio[2074]: time="2021-08-10 22:38:44.389290496Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a88efec0-b4bd-4612-802e-07d85389a5ba name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:38:44 multinode-20210810223223-30291 crio[2074]: time="2021-08-10 22:38:44.389544640Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0546f83cfad21f66789e8c6c45beb48deb7e73bfb0f09ffb76be32add922c472,PodSandboxId:77754eaffbc8999d0eee55167b99849548aeceee68e42223821dde6df9b71f5f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,State:CONTAINER_RUNNING,CreatedAt:1628634877804702101,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-84b6686758-5h7gq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d11e1a39-be6d-4d16-9086-b6cfef5e1644,},Annotations:map[string]string{io.kubernetes.container.hash: c6d8bc46,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f90af0c728851e601b452c99af417f5bfcd580b5355c6e8bb56fa2c701aacb7,PodSandboxId:f2a2f0197075a11d02804fe88085db4f127993180baa17934f4f244fbc5c7a8c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1,State:CONTAINER_RUNNING,CreatedAt:1628634817927932495,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2bvdc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c26b9021-1d86-475c-ac98-6f7e7e07c434,},Annotations:map[string]string{io.kubernetes.container.hash: 4970c570,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfebebba90bf30cb1d511f33375a3e9b6bd91ddc07cabfe5b5589a14fe35685c,PodSandboxId:552c60b72eff7ec71d413674f557175d18290855b2fbb557210d2ac15208a685,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628634817667541294,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af946d1d-fa19-47fa-8c83-fd1d06a0e788,},Annotations:map[string]string{io.kubernetes.container.hash: 31e4f950,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea6ca1aa48dc2cf026c5ed8947bfb8ef90ac7ac3d0f88bee1bc2b22511e83b4,PodSandboxId:fbff0e39503c70d012b363b8fe670ce8a9f3577e7e15e998600a2fa8b6670222,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628634816892860051,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-v7x6p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c4eb44b-9d97-4934-aa16-8b8625bf04cf,},Annotations:map[string]string{io.kubernetes.container.hash: e2b5faf8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dn
s-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:450a88f25c78b26558e59b4e708a2af565fa95ffdc04a9cb0e91b46224c6573d,PodSandboxId:2b47855215e9f9a81ed9a31f5e73478cd23463ae8391753d3b15909daf03e2bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628634815170944830,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lmhw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a10306d-93c9-4aac-b47a-8bd1d
406882c,},Annotations:map[string]string{io.kubernetes.container.hash: a3c465e1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b071c54f171f451e6e33cdbfbddab63c53133700bfe5dbff374f9d207f00facf,PodSandboxId:8370bc0b5b9a970d4007204b3b67a7266d452619c4263328f4a8403fa748ea90,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628634790862838159,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18b06801ebf2048d768b73e098da
8a40,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d8b11c78d38735ea0d95a1ef9a0812af3d2ecb679a807514125823d90f0f456,PodSandboxId:76613b5ae865f77eb677f7fc6c1d96c89c385010f19742b2a2daaf3b3d536da2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628634790776364691,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 77761625d867cf54e5130d9def04b55c,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b22cef1088cdfa5b9c5d2f6974fa60c0edf2132c0881c9c61984c16c2864f97,PodSandboxId:2d4522aed7a375e49e9a8b17d6da385d86a65fcbac37cb6f75c688d6842598cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628634790563156773,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
099813bef5425d688516ac434247f4d,},Annotations:map[string]string{io.kubernetes.container.hash: 96b9c909,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d359668a0c2391fdfb2cd9dcf67ffaf1c30ce8549c23d047c95ef4a75f2ada,PodSandboxId:42a0fc97ea8eb49918d4f19e7d3b3837f13d12aa3c1b7f8c191d955c9ae417e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628634790483989854,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee4e4232c1192224bf90edfa1030cde5,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 755f249c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a88efec0-b4bd-4612-802e-07d85389a5ba name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:38:44 multinode-20210810223223-30291 crio[2074]: time="2021-08-10 22:38:44.420135478Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2b1719eb-85a7-440b-a856-ce4310b4caaa name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:38:44 multinode-20210810223223-30291 crio[2074]: time="2021-08-10 22:38:44.420271117Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2b1719eb-85a7-440b-a856-ce4310b4caaa name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:38:44 multinode-20210810223223-30291 crio[2074]: time="2021-08-10 22:38:44.420444629Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0546f83cfad21f66789e8c6c45beb48deb7e73bfb0f09ffb76be32add922c472,PodSandboxId:77754eaffbc8999d0eee55167b99849548aeceee68e42223821dde6df9b71f5f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,State:CONTAINER_RUNNING,CreatedAt:1628634877804702101,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-84b6686758-5h7gq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d11e1a39-be6d-4d16-9086-b6cfef5e1644,},Annotations:map[string]string{io.kubernetes.container.hash: c6d8bc46,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f90af0c728851e601b452c99af417f5bfcd580b5355c6e8bb56fa2c701aacb7,PodSandboxId:f2a2f0197075a11d02804fe88085db4f127993180baa17934f4f244fbc5c7a8c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1,State:CONTAINER_RUNNING,CreatedAt:1628634817927932495,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2bvdc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c26b9021-1d86-475c-ac98-6f7e7e07c434,},Annotations:map[string]string{io.kubernetes.container.hash: 4970c570,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfebebba90bf30cb1d511f33375a3e9b6bd91ddc07cabfe5b5589a14fe35685c,PodSandboxId:552c60b72eff7ec71d413674f557175d18290855b2fbb557210d2ac15208a685,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628634817667541294,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af946d1d-fa19-47fa-8c83-fd1d06a0e788,},Annotations:map[string]string{io.kubernetes.container.hash: 31e4f950,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ea6ca1aa48dc2cf026c5ed8947bfb8ef90ac7ac3d0f88bee1bc2b22511e83b4,PodSandboxId:fbff0e39503c70d012b363b8fe670ce8a9f3577e7e15e998600a2fa8b6670222,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628634816892860051,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-v7x6p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c4eb44b-9d97-4934-aa16-8b8625bf04cf,},Annotations:map[string]string{io.kubernetes.container.hash: e2b5faf8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dn
s-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:450a88f25c78b26558e59b4e708a2af565fa95ffdc04a9cb0e91b46224c6573d,PodSandboxId:2b47855215e9f9a81ed9a31f5e73478cd23463ae8391753d3b15909daf03e2bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628634815170944830,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lmhw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a10306d-93c9-4aac-b47a-8bd1d
406882c,},Annotations:map[string]string{io.kubernetes.container.hash: a3c465e1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b071c54f171f451e6e33cdbfbddab63c53133700bfe5dbff374f9d207f00facf,PodSandboxId:8370bc0b5b9a970d4007204b3b67a7266d452619c4263328f4a8403fa748ea90,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628634790862838159,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18b06801ebf2048d768b73e098da
8a40,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d8b11c78d38735ea0d95a1ef9a0812af3d2ecb679a807514125823d90f0f456,PodSandboxId:76613b5ae865f77eb677f7fc6c1d96c89c385010f19742b2a2daaf3b3d536da2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628634790776364691,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 77761625d867cf54e5130d9def04b55c,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b22cef1088cdfa5b9c5d2f6974fa60c0edf2132c0881c9c61984c16c2864f97,PodSandboxId:2d4522aed7a375e49e9a8b17d6da385d86a65fcbac37cb6f75c688d6842598cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628634790563156773,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
099813bef5425d688516ac434247f4d,},Annotations:map[string]string{io.kubernetes.container.hash: 96b9c909,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d359668a0c2391fdfb2cd9dcf67ffaf1c30ce8549c23d047c95ef4a75f2ada,PodSandboxId:42a0fc97ea8eb49918d4f19e7d3b3837f13d12aa3c1b7f8c191d955c9ae417e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628634790483989854,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-20210810223223-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee4e4232c1192224bf90edfa1030cde5,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 755f249c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2b1719eb-85a7-440b-a856-ce4310b4caaa name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                               CREATED             STATE               NAME                      ATTEMPT             POD ID
	0546f83cfad21       docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47   4 minutes ago       Running             busybox                   0                   77754eaffbc89
	2f90af0c72885       6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb                                    5 minutes ago       Running             kindnet-cni               0                   f2a2f0197075a
	bfebebba90bf3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                    5 minutes ago       Running             storage-provisioner       0                   552c60b72eff7
	0ea6ca1aa48dc       296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899                                    5 minutes ago       Running             coredns                   0                   fbff0e39503c7
	450a88f25c78b       adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92                                    5 minutes ago       Running             kube-proxy                0                   2b47855215e9f
	b071c54f171f4       6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a                                    5 minutes ago       Running             kube-scheduler            0                   8370bc0b5b9a9
	9d8b11c78d387       bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9                                    5 minutes ago       Running             kube-controller-manager   0                   76613b5ae865f
	3b22cef1088cd       3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80                                    5 minutes ago       Running             kube-apiserver            0                   2d4522aed7a37
	a9d359668a0c2       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934                                    5 minutes ago       Running             etcd                      0                   42a0fc97ea8eb
	
	* 
	* ==> coredns [0ea6ca1aa48dc2cf026c5ed8947bfb8ef90ac7ac3d0f88bee1bc2b22511e83b4] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = 7ae91e86dd75dee9ae501cb58003198b
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-20210810223223-30291
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-20210810223223-30291
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=877a5691753f15214a0c269ac69dcdc5a4d99fcd
	                    minikube.k8s.io/name=multinode-20210810223223-30291
	                    minikube.k8s.io/updated_at=2021_08_10T22_33_20_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Aug 2021 22:33:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-20210810223223-30291
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Aug 2021 22:38:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Aug 2021 22:34:55 +0000   Tue, 10 Aug 2021 22:33:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Aug 2021 22:34:55 +0000   Tue, 10 Aug 2021 22:33:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Aug 2021 22:34:55 +0000   Tue, 10 Aug 2021 22:33:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Aug 2021 22:34:55 +0000   Tue, 10 Aug 2021 22:33:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.32
	  Hostname:    multinode-20210810223223-30291
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186320Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186320Ki
	  pods:               110
	System Info:
	  Machine ID:                 de21d03631a8455fad3ba3176e019295
	  System UUID:                de21d036-31a8-455f-ad3b-a3176e019295
	  Boot ID:                    bec9016f-f72c-4c6f-b82e-0ecc285f4ce2
	  Kernel Version:             4.19.182
	  OS Image:                   Buildroot 2020.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.20.2
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                      ------------  ----------  ---------------  -------------  ---
	  default                     busybox-84b6686758-5h7gq                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 coredns-558bd4d5db-v7x6p                                  100m (5%)     0 (0%)      70Mi (3%)        170Mi (7%)     5m11s
	  kube-system                 etcd-multinode-20210810223223-30291                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m19s
	  kube-system                 kindnet-2bvdc                                             100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m11s
	  kube-system                 kube-apiserver-multinode-20210810223223-30291             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m27s
	  kube-system                 kube-controller-manager-multinode-20210810223223-30291    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-proxy-lmhw9                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 kube-scheduler-multinode-20210810223223-30291             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 storage-provisioner                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From        Message
	  ----    ------                   ----   ----        -------
	  Normal  Starting                 5m19s  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m19s  kubelet     Node multinode-20210810223223-30291 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m19s  kubelet     Node multinode-20210810223223-30291 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m19s  kubelet     Node multinode-20210810223223-30291 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m19s  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m11s  kubelet     Node multinode-20210810223223-30291 status is now: NodeReady
	  Normal  Starting                 5m9s   kube-proxy  Starting kube-proxy.
	
	
	Name:               multinode-20210810223223-30291-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-20210810223223-30291-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Aug 2021 22:34:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-20210810223223-30291-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Aug 2021 22:38:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Aug 2021 22:34:49 +0000   Tue, 10 Aug 2021 22:34:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Aug 2021 22:34:49 +0000   Tue, 10 Aug 2021 22:34:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Aug 2021 22:34:49 +0000   Tue, 10 Aug 2021 22:34:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Aug 2021 22:34:49 +0000   Tue, 10 Aug 2021 22:34:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.251
	  Hostname:    multinode-20210810223223-30291-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186496Ki
	  pods:               110
	System Info:
	  Machine ID:                 005771fefc2349659960c7a87d3f4dae
	  System UUID:                005771fe-fc23-4965-9960-c7a87d3f4dae
	  Boot ID:                    1c43cbb6-d195-42d9-894f-bf1b95ff036b
	  Kernel Version:             4.19.182
	  OS Image:                   Buildroot 2020.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.20.2
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-84b6686758-nfzzk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kindnet-frf82               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m25s
	  kube-system                 kube-proxy-6t6mb            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 4m25s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m25s (x2 over 4m25s)  kubelet     Node multinode-20210810223223-30291-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m25s (x2 over 4m25s)  kubelet     Node multinode-20210810223223-30291-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m25s (x2 over 4m25s)  kubelet     Node multinode-20210810223223-30291-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m25s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 4m22s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                4m15s                  kubelet     Node multinode-20210810223223-30291-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [Aug10 22:32] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.088919] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +3.694860] Unstable clock detected, switching default tracing clock to "global"
	              If you want to keep using the local clock, then add:
	                "trace_clock=local"
	              on the kernel command line
	[  +0.000017] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.251584] systemd-fstab-generator[1161]: Ignoring "noauto" for root device
	[  +0.038227] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +0.961352] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1727 comm=systemd-network
	[  +1.254958] vboxguest: loading out-of-tree module taints kernel.
	[  +0.006174] vboxguest: PCI device not found, probably running on physical hardware.
	[  +1.709643] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[ +14.578437] systemd-fstab-generator[2161]: Ignoring "noauto" for root device
	[  +0.147253] systemd-fstab-generator[2174]: Ignoring "noauto" for root device
	[  +0.197594] systemd-fstab-generator[2201]: Ignoring "noauto" for root device
	[Aug10 22:33] systemd-fstab-generator[2404]: Ignoring "noauto" for root device
	[ +17.433952] systemd-fstab-generator[2814]: Ignoring "noauto" for root device
	[ +16.294097] kauditd_printk_skb: 38 callbacks suppressed
	[Aug10 22:34] NFSD: Unable to end grace period: -110
	
	* 
	* ==> etcd [a9d359668a0c2391fdfb2cd9dcf67ffaf1c30ce8549c23d047c95ef4a75f2ada] <==
	* 2021-08-10 22:34:42.464793 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:34:52.464706 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:35:02.465202 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:35:12.464211 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:35:22.464958 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:35:32.465006 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:35:42.465044 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:35:52.465381 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:36:02.464159 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:36:12.464138 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:36:22.464574 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:36:32.464335 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:36:42.464340 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:36:52.464228 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:37:02.463816 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:37:12.464274 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:37:22.466118 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:37:32.464849 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:37:42.465104 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:37:52.464222 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:38:02.464141 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:38:12.464882 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:38:22.465808 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:38:32.464908 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-10 22:38:42.464904 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  22:38:44 up 6 min,  0 users,  load average: 0.20, 0.39, 0.22
	Linux multinode-20210810223223-30291 4.19.182 #1 SMP Fri Aug 6 09:11:32 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [3b22cef1088cdfa5b9c5d2f6974fa60c0edf2132c0881c9c61984c16c2864f97] <==
	* I0810 22:34:28.512199       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0810 22:34:28.512250       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0810 22:34:59.758132       1 client.go:360] parsed scheme: "passthrough"
	I0810 22:34:59.758266       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0810 22:34:59.758290       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0810 22:35:40.221354       1 client.go:360] parsed scheme: "passthrough"
	I0810 22:35:40.221745       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0810 22:35:40.221798       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0810 22:36:13.885333       1 client.go:360] parsed scheme: "passthrough"
	I0810 22:36:13.885714       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0810 22:36:13.885756       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0810 22:36:49.812970       1 client.go:360] parsed scheme: "passthrough"
	I0810 22:36:49.813006       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0810 22:36:49.813017       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0810 22:37:25.388022       1 client.go:360] parsed scheme: "passthrough"
	I0810 22:37:25.388108       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0810 22:37:25.388135       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	E0810 22:37:42.768012       1 upgradeaware.go:387] Error proxying data from client to backend: write tcp 192.168.50.32:37876->192.168.50.32:10250: write: connection reset by peer
	I0810 22:38:00.037600       1 client.go:360] parsed scheme: "passthrough"
	I0810 22:38:00.037783       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0810 22:38:00.037798       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0810 22:38:36.872898       1 client.go:360] parsed scheme: "passthrough"
	I0810 22:38:36.873022       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0810 22:38:36.873034       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	E0810 22:38:43.556182       1 upgradeaware.go:387] Error proxying data from client to backend: read tcp 192.168.50.32:8443->192.168.50.1:38988: read: connection reset by peer
	
	* 
	* ==> kube-controller-manager [9d8b11c78d38735ea0d95a1ef9a0812af3d2ecb679a807514125823d90f0f456] <==
	* I0810 22:33:33.780317       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-558bd4d5db to 2"
	I0810 22:33:33.807397       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0810 22:33:33.844874       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-lmhw9"
	I0810 22:33:33.852188       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-2bvdc"
	E0810 22:33:33.959456       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"3eb0224f-214a-4d5e-ba63-b7b722448d21", ResourceVersion:"270", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764231599, loc:(*time.Location)(0x72ff440)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc00174af48), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00174af60)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.
LabelSelector)(0xc0005e5f80), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Gl
usterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc00183ee00), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00174a
f78), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeS
ource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00174af90), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil),
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.21.3", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0005e5fc0)}}, Resources:v1.R
esourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001110cc0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPo
licy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00195fa78), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0004844d0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), Runtime
ClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0002b8d10)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00195fac8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	E0810 22:33:34.033293       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"a503f3df-695b-4295-a3fb-be1b75ae37c5", ResourceVersion:"419", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764231600, loc:(*time.Location)(0x72ff440)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"k
indnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"kindest/kindnetd:v20210326-1e038dc5\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists
\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0021b2f90), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0021b2fa8)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0021b2fc0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0021b2fd8)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0021a1000), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, Creat
ionTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0021b2ff0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexV
olumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0021b3008), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVol
umeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSI
VolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0021b3020), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v
1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"kindest/kindnetd:v20210326-1e038dc5", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0021a1020)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0021a1060)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amoun
t{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropag
ation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc00218ff20), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0021b15b8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000aaefc0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil
), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0021c59c0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0021b1600)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition
(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0810 22:33:34.038041       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-v7x6p"
	I0810 22:33:34.059269       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-gjhgp"
	E0810 22:33:34.077923       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"3eb0224f-214a-4d5e-ba63-b7b722448d21", ResourceVersion:"418", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764231599, loc:(*time.Location)(0x72ff440)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0021b2de0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0021b2df8)}, v1.ManagedFieldsEntry{Manager:"kube-co
ntroller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0021b2e10), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0021b2e28)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0021a0ee0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElastic
BlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0021cad00), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSour
ce)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0021b2e40), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSo
urce)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0021b2e58), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil),
Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.21.3", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil),
WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0021a0f20)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"F
ile", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc00218fda0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0021b1248), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000aaee00), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)
(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0021c5810)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0021b1298)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest ve
rsion and try again
	I0810 22:33:34.139230       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-gjhgp"
	W0810 22:34:19.160323       1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-20210810223223-30291-m02" does not exist
	I0810 22:34:19.213872       1 range_allocator.go:373] Set node multinode-20210810223223-30291-m02 PodCIDR to [10.244.1.0/24]
	I0810 22:34:19.229592       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-6t6mb"
	I0810 22:34:19.231191       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-frf82"
	E0810 22:34:19.348447       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"3eb0224f-214a-4d5e-ba63-b7b722448d21", ResourceVersion:"546", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764231599, loc:(*time.Location)(0x72ff440)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001fa7680), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001fa7698)}, v1.ManagedFieldsEntry{Manager:"kube-co
ntroller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001fa76b0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001fa76f8)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0015ce980), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElastic
BlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc001f1a3c0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSour
ce)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001fa7710), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSo
urce)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001fa7728), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil),
Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.21.3", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil),
WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0015ceb00)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"F
ile", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001e67800), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001f26338), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000239570), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)
(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0020e6730)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001f26388)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:1, NumberMisscheduled:0, DesiredNumberScheduled:2, NumberReady:1, ObservedGeneration:1, UpdatedNumberScheduled:1, NumberAvailable:1, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest ve
rsion and try again
	W0810 22:34:23.229022       1 node_lifecycle_controller.go:1013] Missing timestamp for Node multinode-20210810223223-30291-m02. Assuming now as a timestamp.
	I0810 22:34:23.229242       1 event.go:291] "Event occurred" object="multinode-20210810223223-30291-m02" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-20210810223223-30291-m02 event: Registered Node multinode-20210810223223-30291-m02 in Controller"
	I0810 22:34:31.756639       1 event.go:291] "Event occurred" object="default/busybox" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-84b6686758 to 2"
	I0810 22:34:31.775807       1 event.go:291] "Event occurred" object="default/busybox-84b6686758" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-84b6686758-nfzzk"
	I0810 22:34:31.786266       1 event.go:291] "Event occurred" object="default/busybox-84b6686758" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-84b6686758-5h7gq"
	I0810 22:34:33.244427       1 event.go:291] "Event occurred" object="default/busybox-84b6686758-nfzzk" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-84b6686758-nfzzk"
	
	* 
	* ==> kube-proxy [450a88f25c78b26558e59b4e708a2af565fa95ffdc04a9cb0e91b46224c6573d] <==
	* I0810 22:33:35.569628       1 node.go:172] Successfully retrieved node IP: 192.168.50.32
	I0810 22:33:35.569813       1 server_others.go:140] Detected node IP 192.168.50.32
	W0810 22:33:35.569841       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	W0810 22:33:35.654806       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0810 22:33:35.654827       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0810 22:33:35.654841       1 server_others.go:212] Using iptables Proxier.
	I0810 22:33:35.655734       1 server.go:643] Version: v1.21.3
	I0810 22:33:35.658259       1 config.go:315] Starting service config controller
	I0810 22:33:35.658621       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0810 22:33:35.658673       1 config.go:224] Starting endpoint slice config controller
	I0810 22:33:35.658678       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0810 22:33:35.676217       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0810 22:33:35.685273       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0810 22:33:35.759211       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0810 22:33:35.759304       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [b071c54f171f451e6e33cdbfbddab63c53133700bfe5dbff374f9d207f00facf] <==
	* E0810 22:33:15.981752       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0810 22:33:15.981856       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0810 22:33:15.981938       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0810 22:33:15.982018       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0810 22:33:15.982342       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0810 22:33:15.982446       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0810 22:33:15.983840       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0810 22:33:15.987768       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0810 22:33:15.988341       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0810 22:33:15.988857       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0810 22:33:16.802264       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0810 22:33:16.819246       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0810 22:33:16.973719       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0810 22:33:17.040697       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0810 22:33:17.045349       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0810 22:33:17.064578       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0810 22:33:17.193884       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0810 22:33:17.273017       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0810 22:33:17.289233       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0810 22:33:17.372978       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0810 22:33:17.401904       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0810 22:33:17.428102       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0810 22:33:17.471670       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0810 22:33:17.563926       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0810 22:33:19.977395       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2021-08-10 22:32:34 UTC, end at Tue 2021-08-10 22:38:45 UTC. --
	Aug 10 22:33:33 multinode-20210810223223-30291 kubelet[2823]: I0810 22:33:33.910283    2823 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2a10306d-93c9-4aac-b47a-8bd1d406882c-kube-proxy\") pod \"kube-proxy-lmhw9\" (UID: \"2a10306d-93c9-4aac-b47a-8bd1d406882c\") "
	Aug 10 22:33:33 multinode-20210810223223-30291 kubelet[2823]: I0810 22:33:33.910302    2823 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2a10306d-93c9-4aac-b47a-8bd1d406882c-lib-modules\") pod \"kube-proxy-lmhw9\" (UID: \"2a10306d-93c9-4aac-b47a-8bd1d406882c\") "
	Aug 10 22:33:33 multinode-20210810223223-30291 kubelet[2823]: I0810 22:33:33.910321    2823 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c26b9021-1d86-475c-ac98-6f7e7e07c434-cni-cfg\") pod \"kindnet-2bvdc\" (UID: \"c26b9021-1d86-475c-ac98-6f7e7e07c434\") "
	Aug 10 22:33:33 multinode-20210810223223-30291 kubelet[2823]: I0810 22:33:33.910339    2823 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c26b9021-1d86-475c-ac98-6f7e7e07c434-lib-modules\") pod \"kindnet-2bvdc\" (UID: \"c26b9021-1d86-475c-ac98-6f7e7e07c434\") "
	Aug 10 22:33:33 multinode-20210810223223-30291 kubelet[2823]: I0810 22:33:33.910418    2823 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctbl8\" (UniqueName: \"kubernetes.io/projected/2a10306d-93c9-4aac-b47a-8bd1d406882c-kube-api-access-ctbl8\") pod \"kube-proxy-lmhw9\" (UID: \"2a10306d-93c9-4aac-b47a-8bd1d406882c\") "
	Aug 10 22:33:34 multinode-20210810223223-30291 kubelet[2823]: I0810 22:33:34.035403    2823 topology_manager.go:187] "Topology Admit Handler"
	Aug 10 22:33:34 multinode-20210810223223-30291 kubelet[2823]: E0810 22:33:34.041387    2823 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:multinode-20210810223223-30291" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'multinode-20210810223223-30291' and this object
	Aug 10 22:33:34 multinode-20210810223223-30291 kubelet[2823]: I0810 22:33:34.092432    2823 topology_manager.go:187] "Topology Admit Handler"
	Aug 10 22:33:34 multinode-20210810223223-30291 kubelet[2823]: I0810 22:33:34.117762    2823 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c4eb44b-9d97-4934-aa16-8b8625bf04cf-config-volume\") pod \"coredns-558bd4d5db-v7x6p\" (UID: \"0c4eb44b-9d97-4934-aa16-8b8625bf04cf\") "
	Aug 10 22:33:34 multinode-20210810223223-30291 kubelet[2823]: I0810 22:33:34.117795    2823 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnwhk\" (UniqueName: \"kubernetes.io/projected/0c4eb44b-9d97-4934-aa16-8b8625bf04cf-kube-api-access-vnwhk\") pod \"coredns-558bd4d5db-v7x6p\" (UID: \"0c4eb44b-9d97-4934-aa16-8b8625bf04cf\") "
	Aug 10 22:33:35 multinode-20210810223223-30291 kubelet[2823]: E0810 22:33:35.219307    2823 configmap.go:200] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Aug 10 22:33:35 multinode-20210810223223-30291 kubelet[2823]: E0810 22:33:35.219475    2823 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/configmap/0c4eb44b-9d97-4934-aa16-8b8625bf04cf-config-volume podName:0c4eb44b-9d97-4934-aa16-8b8625bf04cf nodeName:}" failed. No retries permitted until 2021-08-10 22:33:35.719411618 +0000 UTC m=+16.076751012 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c4eb44b-9d97-4934-aa16-8b8625bf04cf-config-volume\") pod \"coredns-558bd4d5db-v7x6p\" (UID: \"0c4eb44b-9d97-4934-aa16-8b8625bf04cf\") : failed to sync configmap cache: timed out waiting for the condition"
	Aug 10 22:33:35 multinode-20210810223223-30291 kubelet[2823]: E0810 22:33:35.757470    2823 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/c26b9021-1d86-475c-ac98-6f7e7e07c434/etc-hosts with error exit status 1" pod="kube-system/kindnet-2bvdc"
	Aug 10 22:33:36 multinode-20210810223223-30291 kubelet[2823]: I0810 22:33:36.209114    2823 topology_manager.go:187] "Topology Admit Handler"
	Aug 10 22:33:36 multinode-20210810223223-30291 kubelet[2823]: I0810 22:33:36.257659    2823 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7dtx\" (UniqueName: \"kubernetes.io/projected/af946d1d-fa19-47fa-8c83-fd1d06a0e788-kube-api-access-t7dtx\") pod \"storage-provisioner\" (UID: \"af946d1d-fa19-47fa-8c83-fd1d06a0e788\") "
	Aug 10 22:33:36 multinode-20210810223223-30291 kubelet[2823]: I0810 22:33:36.257794    2823 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/af946d1d-fa19-47fa-8c83-fd1d06a0e788-tmp\") pod \"storage-provisioner\" (UID: \"af946d1d-fa19-47fa-8c83-fd1d06a0e788\") "
	Aug 10 22:34:31 multinode-20210810223223-30291 kubelet[2823]: I0810 22:34:31.845176    2823 topology_manager.go:187] "Topology Admit Handler"
	Aug 10 22:34:31 multinode-20210810223223-30291 kubelet[2823]: E0810 22:34:31.851799    2823 reflector.go:138] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:multinode-20210810223223-30291" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'multinode-20210810223223-30291' and this object
	Aug 10 22:34:32 multinode-20210810223223-30291 kubelet[2823]: I0810 22:34:32.036112    2823 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb4qr\" (UniqueName: \"kubernetes.io/projected/d11e1a39-be6d-4d16-9086-b6cfef5e1644-kube-api-access-gb4qr\") pod \"busybox-84b6686758-5h7gq\" (UID: \"d11e1a39-be6d-4d16-9086-b6cfef5e1644\") "
	Aug 10 22:34:33 multinode-20210810223223-30291 kubelet[2823]: E0810 22:34:33.144805    2823 projected.go:293] Couldn't get configMap default/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Aug 10 22:34:33 multinode-20210810223223-30291 kubelet[2823]: E0810 22:34:33.144936    2823 projected.go:199] Error preparing data for projected volume kube-api-access-gb4qr for pod default/busybox-84b6686758-5h7gq: failed to sync configmap cache: timed out waiting for the condition
	Aug 10 22:34:33 multinode-20210810223223-30291 kubelet[2823]: E0810 22:34:33.145073    2823 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/projected/d11e1a39-be6d-4d16-9086-b6cfef5e1644-kube-api-access-gb4qr podName:d11e1a39-be6d-4d16-9086-b6cfef5e1644 nodeName:}" failed. No retries permitted until 2021-08-10 22:34:33.645034745 +0000 UTC m=+74.002374286 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"kube-api-access-gb4qr\" (UniqueName: \"kubernetes.io/projected/d11e1a39-be6d-4d16-9086-b6cfef5e1644-kube-api-access-gb4qr\") pod \"busybox-84b6686758-5h7gq\" (UID: \"d11e1a39-be6d-4d16-9086-b6cfef5e1644\") : failed to sync configmap cache: timed out waiting for the condition"
	Aug 10 22:34:36 multinode-20210810223223-30291 kubelet[2823]: E0810 22:34:36.475104    2823 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/d11e1a39-be6d-4d16-9086-b6cfef5e1644/etc-hosts with error exit status 1" pod="default/busybox-84b6686758-5h7gq"
	Aug 10 22:35:37 multinode-20210810223223-30291 kubelet[2823]: E0810 22:35:37.134148    2823 kubelet.go:1701] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[config-volume kube-api-access-txw9f], unattached volumes=[config-volume kube-api-access-txw9f]: timed out waiting for the condition" pod="kube-system/coredns-558bd4d5db-gjhgp"
	Aug 10 22:35:37 multinode-20210810223223-30291 kubelet[2823]: E0810 22:35:37.134331    2823 pod_workers.go:190] "Error syncing pod, skipping" err="unmounted volumes=[config-volume kube-api-access-txw9f], unattached volumes=[config-volume kube-api-access-txw9f]: timed out waiting for the condition" pod="kube-system/coredns-558bd4d5db-gjhgp" podUID=122395e4-35b4-4693-843a-15fb7d8031f5
	
	* 
	* ==> storage-provisioner [bfebebba90bf30cb1d511f33375a3e9b6bd91ddc07cabfe5b5589a14fe35685c] <==
	* I0810 22:33:37.814409       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0810 22:33:37.835858       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0810 22:33:37.835985       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0810 22:33:37.854998       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0810 22:33:37.855860       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-20210810223223-30291_afa147a1-4085-48f2-b329-57da5a25f4d5!
	I0810 22:33:37.857788       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4af19e78-1255-4880-9e84-6ce0ffca1e58", APIVersion:"v1", ResourceVersion:"484", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-20210810223223-30291_afa147a1-4085-48f2-b329-57da5a25f4d5 became leader
	I0810 22:33:37.959707       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-20210810223223-30291_afa147a1-4085-48f2-b329-57da5a25f4d5!
	

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-20210810223223-30291 -n multinode-20210810223223-30291
helpers_test.go:262: (dbg) Run:  kubectl --context multinode-20210810223223-30291 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:268: non-running pods: 
helpers_test.go:270: ======> post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: describe non-running pods <======
helpers_test.go:273: (dbg) Run:  kubectl --context multinode-20210810223223-30291 describe pod 
helpers_test.go:273: (dbg) Non-zero exit: kubectl --context multinode-20210810223223-30291 describe pod : exit status 1 (50.153199ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:275: kubectl --context multinode-20210810223223-30291 describe pod : exit status 1
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (63.32s)

TestPreload (172.03s)

=== RUN   TestPreload
preload_test.go:48: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20210810224820-30291 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.17.0
E0810 22:49:35.791502   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222707-30291/client.crt: no such file or directory
preload_test.go:48: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20210810224820-30291 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.17.0: (2m2.669407169s)
preload_test.go:61: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20210810224820-30291 -- sudo crictl pull busybox
preload_test.go:61: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-20210810224820-30291 -- sudo crictl pull busybox: (3.748468165s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20210810224820-30291 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.17.3
E0810 22:50:58.836321   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222707-30291/client.crt: no such file or directory
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20210810224820-30291 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.17.3: (41.891608143s)
preload_test.go:80: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20210810224820-30291 -- sudo crictl image ls
preload_test.go:85: Expected to find busybox in output of `docker images`, instead got 
-- stdout --
	IMAGE               TAG                 IMAGE ID            SIZE

-- /stdout --
panic.go:613: *** TestPreload FAILED at 2021-08-10 22:51:09.044212284 +0000 UTC m=+2032.971691384
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-20210810224820-30291 -n test-preload-20210810224820-30291
helpers_test.go:245: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-20210810224820-30291 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p test-preload-20210810224820-30291 logs -n 25: (1.478319587s)
helpers_test.go:253: TestPreload logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-------------------------------------------------------------|------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                             |              Profile               |  User   | Version |          Start Time           |           End Time            |
	|---------|-------------------------------------------------------------|------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| kubectl | -p                                                          | multinode-20210810223223-30291     | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:37:43 UTC | Tue, 10 Aug 2021 22:38:43 UTC |
	|         | multinode-20210810223223-30291                              |                                    |         |         |                               |                               |
	|         | -- exec                                                     |                                    |         |         |                               |                               |
	|         | busybox-84b6686758-nfzzk                                    |                                    |         |         |                               |                               |
	|         | -- sh -c nslookup                                           |                                    |         |         |                               |                               |
	|         | host.minikube.internal | awk                                |                                    |         |         |                               |                               |
	|         | 'NR==5' | cut -d' ' -f3                                     |                                    |         |         |                               |                               |
	| -p      | multinode-20210810223223-30291                              | multinode-20210810223223-30291     | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:38:43 UTC | Tue, 10 Aug 2021 22:38:45 UTC |
	|         | logs -n 25                                                  |                                    |         |         |                               |                               |
	| node    | add -p                                                      | multinode-20210810223223-30291     | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:38:45 UTC | Tue, 10 Aug 2021 22:39:33 UTC |
	|         | multinode-20210810223223-30291                              |                                    |         |         |                               |                               |
	|         | -v 3 --alsologtostderr                                      |                                    |         |         |                               |                               |
	| profile | list --output json                                          | minikube                           | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:39:33 UTC | Tue, 10 Aug 2021 22:39:34 UTC |
	| -p      | multinode-20210810223223-30291                              | multinode-20210810223223-30291     | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:39:34 UTC | Tue, 10 Aug 2021 22:39:34 UTC |
	|         | cp testdata/cp-test.txt                                     |                                    |         |         |                               |                               |
	|         | /home/docker/cp-test.txt                                    |                                    |         |         |                               |                               |
	| -p      | multinode-20210810223223-30291                              | multinode-20210810223223-30291     | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:39:34 UTC | Tue, 10 Aug 2021 22:39:35 UTC |
	|         | ssh sudo cat                                                |                                    |         |         |                               |                               |
	|         | /home/docker/cp-test.txt                                    |                                    |         |         |                               |                               |
	| -p      | multinode-20210810223223-30291 cp testdata/cp-test.txt      | multinode-20210810223223-30291     | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:39:35 UTC | Tue, 10 Aug 2021 22:39:35 UTC |
	|         | multinode-20210810223223-30291-m02:/home/docker/cp-test.txt |                                    |         |         |                               |                               |
	| -p      | multinode-20210810223223-30291                              | multinode-20210810223223-30291     | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:39:35 UTC | Tue, 10 Aug 2021 22:39:35 UTC |
	|         | ssh -n                                                      |                                    |         |         |                               |                               |
	|         | multinode-20210810223223-30291-m02                          |                                    |         |         |                               |                               |
	|         | sudo cat /home/docker/cp-test.txt                           |                                    |         |         |                               |                               |
	| -p      | multinode-20210810223223-30291 cp testdata/cp-test.txt      | multinode-20210810223223-30291     | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:39:35 UTC | Tue, 10 Aug 2021 22:39:35 UTC |
	|         | multinode-20210810223223-30291-m03:/home/docker/cp-test.txt |                                    |         |         |                               |                               |
	| -p      | multinode-20210810223223-30291                              | multinode-20210810223223-30291     | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:39:35 UTC | Tue, 10 Aug 2021 22:39:35 UTC |
	|         | ssh -n                                                      |                                    |         |         |                               |                               |
	|         | multinode-20210810223223-30291-m03                          |                                    |         |         |                               |                               |
	|         | sudo cat /home/docker/cp-test.txt                           |                                    |         |         |                               |                               |
	| -p      | multinode-20210810223223-30291                              | multinode-20210810223223-30291     | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:39:35 UTC | Tue, 10 Aug 2021 22:39:37 UTC |
	|         | node stop m03                                               |                                    |         |         |                               |                               |
	| -p      | multinode-20210810223223-30291                              | multinode-20210810223223-30291     | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:39:38 UTC | Tue, 10 Aug 2021 22:40:28 UTC |
	|         | node start m03                                              |                                    |         |         |                               |                               |
	|         | --alsologtostderr                                           |                                    |         |         |                               |                               |
	| stop    | -p                                                          | multinode-20210810223223-30291     | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:40:29 UTC | Tue, 10 Aug 2021 22:40:36 UTC |
	|         | multinode-20210810223223-30291                              |                                    |         |         |                               |                               |
	| start   | -p                                                          | multinode-20210810223223-30291     | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:40:36 UTC | Tue, 10 Aug 2021 22:43:27 UTC |
	|         | multinode-20210810223223-30291                              |                                    |         |         |                               |                               |
	|         | --wait=true -v=8                                            |                                    |         |         |                               |                               |
	|         | --alsologtostderr                                           |                                    |         |         |                               |                               |
	| -p      | multinode-20210810223223-30291                              | multinode-20210810223223-30291     | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:43:27 UTC | Tue, 10 Aug 2021 22:43:28 UTC |
	|         | node delete m03                                             |                                    |         |         |                               |                               |
	| -p      | multinode-20210810223223-30291                              | multinode-20210810223223-30291     | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:43:29 UTC | Tue, 10 Aug 2021 22:43:34 UTC |
	|         | stop                                                        |                                    |         |         |                               |                               |
	| start   | -p                                                          | multinode-20210810223223-30291     | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:43:34 UTC | Tue, 10 Aug 2021 22:45:31 UTC |
	|         | multinode-20210810223223-30291                              |                                    |         |         |                               |                               |
	|         | --wait=true -v=8                                            |                                    |         |         |                               |                               |
	|         | --alsologtostderr                                           |                                    |         |         |                               |                               |
	|         | --driver=kvm2                                               |                                    |         |         |                               |                               |
	|         | --container-runtime=crio                                    |                                    |         |         |                               |                               |
	| start   | -p                                                          | multinode-20210810223223-30291-m03 | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:45:31 UTC | Tue, 10 Aug 2021 22:46:30 UTC |
	|         | multinode-20210810223223-30291-m03                          |                                    |         |         |                               |                               |
	|         | --driver=kvm2                                               |                                    |         |         |                               |                               |
	|         | --container-runtime=crio                                    |                                    |         |         |                               |                               |
	| delete  | -p                                                          | multinode-20210810223223-30291-m03 | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:46:31 UTC | Tue, 10 Aug 2021 22:46:32 UTC |
	|         | multinode-20210810223223-30291-m03                          |                                    |         |         |                               |                               |
	| -p      | multinode-20210810223223-30291                              | multinode-20210810223223-30291     | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:46:32 UTC | Tue, 10 Aug 2021 22:46:33 UTC |
	|         | logs -n 25                                                  |                                    |         |         |                               |                               |
	| delete  | -p                                                          | multinode-20210810223223-30291     | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:46:34 UTC | Tue, 10 Aug 2021 22:46:35 UTC |
	|         | multinode-20210810223223-30291                              |                                    |         |         |                               |                               |
	| start   | -p                                                          | test-preload-20210810224820-30291  | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:48:20 UTC | Tue, 10 Aug 2021 22:50:23 UTC |
	|         | test-preload-20210810224820-30291                           |                                    |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                             |                                    |         |         |                               |                               |
	|         | --wait=true --preload=false                                 |                                    |         |         |                               |                               |
	|         | --driver=kvm2                                               |                                    |         |         |                               |                               |
	|         | --container-runtime=crio                                    |                                    |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.0                                |                                    |         |         |                               |                               |
	| ssh     | -p                                                          | test-preload-20210810224820-30291  | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:50:23 UTC | Tue, 10 Aug 2021 22:50:26 UTC |
	|         | test-preload-20210810224820-30291                           |                                    |         |         |                               |                               |
	|         | -- sudo crictl pull busybox                                 |                                    |         |         |                               |                               |
	| start   | -p                                                          | test-preload-20210810224820-30291  | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:50:26 UTC | Tue, 10 Aug 2021 22:51:08 UTC |
	|         | test-preload-20210810224820-30291                           |                                    |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                             |                                    |         |         |                               |                               |
	|         | -v=1 --wait=true --driver=kvm2                              |                                    |         |         |                               |                               |
	|         |  --container-runtime=crio                                   |                                    |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.3                                |                                    |         |         |                               |                               |
	| ssh     | -p                                                          | test-preload-20210810224820-30291  | jenkins | v1.22.0 | Tue, 10 Aug 2021 22:51:08 UTC | Tue, 10 Aug 2021 22:51:09 UTC |
	|         | test-preload-20210810224820-30291                           |                                    |         |         |                               |                               |
	|         | -- sudo crictl image ls                                     |                                    |         |         |                               |                               |
	|---------|-------------------------------------------------------------|------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/10 22:50:26
	Running on machine: debian-jenkins-agent-3
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0810 22:50:26.964178   32288 out.go:298] Setting OutFile to fd 1 ...
	I0810 22:50:26.964264   32288 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:50:26.964270   32288 out.go:311] Setting ErrFile to fd 2...
	I0810 22:50:26.964274   32288 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:50:26.964379   32288 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin
	I0810 22:50:26.964651   32288 out.go:305] Setting JSON to false
	I0810 22:50:27.001781   32288 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-3","uptime":9187,"bootTime":1628626640,"procs":164,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0810 22:50:27.001866   32288 start.go:121] virtualization: kvm guest
	I0810 22:50:27.004435   32288 out.go:177] * [test-preload-20210810224820-30291] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0810 22:50:27.004559   32288 notify.go:169] Checking for updates...
	I0810 22:50:27.006026   32288 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0810 22:50:27.007443   32288 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0810 22:50:27.008814   32288 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	I0810 22:50:27.010094   32288 out.go:177]   - MINIKUBE_LOCATION=12230
	I0810 22:50:27.010897   32288 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:50:27.011009   32288 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:50:27.022145   32288 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41341
	I0810 22:50:27.022561   32288 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:50:27.023148   32288 main.go:130] libmachine: Using API Version  1
	I0810 22:50:27.023179   32288 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:50:27.023582   32288 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:50:27.023744   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .DriverName
	I0810 22:50:27.025527   32288 out.go:177] * Kubernetes 1.21.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.21.3
	I0810 22:50:27.025575   32288 driver.go:335] Setting default libvirt URI to qemu:///system
	I0810 22:50:27.025927   32288 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:50:27.025967   32288 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:50:27.037460   32288 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42501
	I0810 22:50:27.038555   32288 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:50:27.039605   32288 main.go:130] libmachine: Using API Version  1
	I0810 22:50:27.039627   32288 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:50:27.039985   32288 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:50:27.040180   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .DriverName
	I0810 22:50:27.069131   32288 out.go:177] * Using the kvm2 driver based on existing profile
	I0810 22:50:27.069155   32288 start.go:278] selected driver: kvm2
	I0810 22:50:27.069162   32288 start.go:751] validating driver "kvm2" against &{Name:test-preload-20210810224820-30291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12122/minikube-v1.22.0-1628238775-12122.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 Cl
usterName:test-preload-20210810224820-30291 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.38 Port:8443 KubernetesVersion:v1.17.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0810 22:50:27.069256   32288 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0810 22:50:27.070259   32288 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:50:27.070464   32288 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0810 22:50:27.081432   32288 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0810 22:50:27.081729   32288 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0810 22:50:27.081755   32288 cni.go:93] Creating CNI manager for ""
	I0810 22:50:27.081765   32288 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0810 22:50:27.081780   32288 start_flags.go:277] config:
	{Name:test-preload-20210810224820-30291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12122/minikube-v1.22.0-1628238775-12122.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.3 ClusterName:test-preload-20210810224820-30291 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.38 Port:8443 KubernetesVersion:v1.17.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0810 22:50:27.081873   32288 iso.go:123] acquiring lock: {Name:mke8829815ca14456120fefc524d0a056bf82da0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:50:27.083917   32288 out.go:177] * Starting control plane node test-preload-20210810224820-30291 in cluster test-preload-20210810224820-30291
	I0810 22:50:27.083940   32288 preload.go:131] Checking if preload exists for k8s version v1.17.3 and runtime crio
	W0810 22:50:27.150423   32288 preload.go:114] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.17.3-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0810 22:50:27.150621   32288 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/test-preload-20210810224820-30291/config.json ...
	I0810 22:50:27.150779   32288 cache.go:108] acquiring lock: {Name:mke954c7d9774c49025937331a3dc2de2a6c3cb5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:50:27.150839   32288 cache.go:205] Successfully downloaded all kic artifacts
	I0810 22:50:27.150852   32288 cache.go:108] acquiring lock: {Name:mkec4bb76093efeaa18724ad69ff92dc68ba125a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:50:27.150839   32288 cache.go:108] acquiring lock: {Name:mk7f0957b6522dd4a59703cbcacb5af51128e30f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:50:27.150876   32288 start.go:313] acquiring machines lock for test-preload-20210810224820-30291: {Name:mk9647f7c84b24381af0d3e731fd883065efc3b8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0810 22:50:27.150811   32288 cache.go:108] acquiring lock: {Name:mkd68868c6f3152a4f0f8ea5338d071e7774615a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:50:27.150886   32288 cache.go:108] acquiring lock: {Name:mk10458965bc84085a9698fe6d09a2967145d153 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:50:27.150861   32288 cache.go:108] acquiring lock: {Name:mk7ef21614cb7482543d2a19e506f61e3206af54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:50:27.150779   32288 cache.go:108] acquiring lock: {Name:mk1c4cef81440134350587e783f82d4d62394eca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:50:27.150936   32288 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 exists
	I0810 22:50:27.150987   32288 cache.go:97] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4" took 206.275µs
	I0810 22:50:27.150982   32288 cache.go:108] acquiring lock: {Name:mkeba9ce1f19ef97a3f170beebaaf10fddad4866 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:50:27.151013   32288 cache.go:81] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 succeeded
	I0810 22:50:27.151037   32288 start.go:317] acquired machines lock for "test-preload-20210810224820-30291" in 140.622µs
	I0810 22:50:27.151059   32288 start.go:93] Skipping create...Using existing machine configuration
	I0810 22:50:27.151073   32288 fix.go:55] fixHost starting: 
	I0810 22:50:27.151047   32288 image.go:133] retrieving image: k8s.gcr.io/kube-scheduler:v1.17.3
	I0810 22:50:27.151082   32288 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/coredns_1.6.5 exists
	I0810 22:50:27.151095   32288 image.go:133] retrieving image: k8s.gcr.io/kube-proxy:v1.17.3
	I0810 22:50:27.151100   32288 cache.go:97] cache image "k8s.gcr.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/coredns_1.6.5" took 287.76µs
	I0810 22:50:27.151102   32288 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/pause_3.1 exists
	I0810 22:50:27.151115   32288 cache.go:81] save to tar file k8s.gcr.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/coredns_1.6.5 succeeded
	I0810 22:50:27.151125   32288 cache.go:97] cache image "k8s.gcr.io/pause:3.1" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/pause_3.1" took 360.672µs
	I0810 22:50:27.151157   32288 image.go:133] retrieving image: k8s.gcr.io/kube-controller-manager:v1.17.3
	I0810 22:50:27.151166   32288 cache.go:81] save to tar file k8s.gcr.io/pause:3.1 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/pause_3.1 succeeded
	I0810 22:50:27.151136   32288 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 exists
	I0810 22:50:27.150892   32288 cache.go:108] acquiring lock: {Name:mkb941683c391d41ef457dc15d9b1932efcf0178 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:50:27.151196   32288 cache.go:97] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0" took 233.007µs
	I0810 22:50:27.151266   32288 cache.go:81] save to tar file k8s.gcr.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 succeeded
	I0810 22:50:27.151098   32288 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 exists
	I0810 22:50:27.151290   32288 cache.go:97] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0" took 404.482µs
	I0810 22:50:27.151298   32288 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0810 22:50:27.151309   32288 cache.go:81] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 succeeded
	I0810 22:50:27.151322   32288 cache.go:97] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 450.148µs
	I0810 22:50:27.151345   32288 cache.go:81] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0810 22:50:27.151550   32288 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:50:27.151596   32288 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:50:27.150888   32288 cache.go:108] acquiring lock: {Name:mkf276c292a45c7458c003a71683a2395f53b911 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:50:27.151815   32288 image.go:133] retrieving image: k8s.gcr.io/kube-apiserver:v1.17.3
	I0810 22:50:27.152177   32288 image.go:175] daemon lookup for k8s.gcr.io/kube-proxy:v1.17.3: Error response from daemon: reference does not exist
	I0810 22:50:27.152178   32288 image.go:175] daemon lookup for k8s.gcr.io/kube-scheduler:v1.17.3: Error response from daemon: reference does not exist
	I0810 22:50:27.152269   32288 image.go:175] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.17.3: Error response from daemon: reference does not exist
	I0810 22:50:27.152480   32288 image.go:175] daemon lookup for k8s.gcr.io/kube-apiserver:v1.17.3: Error response from daemon: reference does not exist
	I0810 22:50:27.163300   32288 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36923
	I0810 22:50:27.163675   32288 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:50:27.164166   32288 main.go:130] libmachine: Using API Version  1
	I0810 22:50:27.164185   32288 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:50:27.164579   32288 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:50:27.164780   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .DriverName
	I0810 22:50:27.164925   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetState
	I0810 22:50:27.168081   32288 fix.go:108] recreateIfNeeded on test-preload-20210810224820-30291: state=Running err=<nil>
	W0810 22:50:27.168111   32288 fix.go:134] unexpected machine state, will restart: <nil>
	I0810 22:50:27.170377   32288 out.go:177] * Updating the running kvm2 "test-preload-20210810224820-30291" VM ...
	I0810 22:50:27.170406   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .DriverName
	I0810 22:50:27.170573   32288 machine.go:88] provisioning docker machine ...
	I0810 22:50:27.170593   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .DriverName
	I0810 22:50:27.170753   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetMachineName
	I0810 22:50:27.170881   32288 buildroot.go:166] provisioning hostname "test-preload-20210810224820-30291"
	I0810 22:50:27.170899   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetMachineName
	I0810 22:50:27.171019   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetSSHHostname
	I0810 22:50:27.175459   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) DBG | domain test-preload-20210810224820-30291 has defined MAC address 52:54:00:70:8b:73 in network mk-test-preload-20210810224820-30291
	I0810 22:50:27.175821   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:8b:73", ip: ""} in network mk-test-preload-20210810224820-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:48:38 +0000 UTC Type:0 Mac:52:54:00:70:8b:73 Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:test-preload-20210810224820-30291 Clientid:01:52:54:00:70:8b:73}
	I0810 22:50:27.175859   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) DBG | domain test-preload-20210810224820-30291 has defined IP address 192.168.50.38 and MAC address 52:54:00:70:8b:73 in network mk-test-preload-20210810224820-30291
	I0810 22:50:27.175955   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetSSHPort
	I0810 22:50:27.176134   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetSSHKeyPath
	I0810 22:50:27.176266   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetSSHKeyPath
	I0810 22:50:27.176375   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetSSHUsername
	I0810 22:50:27.176501   32288 main.go:130] libmachine: Using SSH client type: native
	I0810 22:50:27.176632   32288 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.38 22 <nil> <nil>}
	I0810 22:50:27.176644   32288 main.go:130] libmachine: About to run SSH command:
	sudo hostname test-preload-20210810224820-30291 && echo "test-preload-20210810224820-30291" | sudo tee /etc/hostname
	I0810 22:50:27.316608   32288 main.go:130] libmachine: SSH cmd err, output: <nil>: test-preload-20210810224820-30291
	
	I0810 22:50:27.316636   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetSSHHostname
	I0810 22:50:27.322050   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) DBG | domain test-preload-20210810224820-30291 has defined MAC address 52:54:00:70:8b:73 in network mk-test-preload-20210810224820-30291
	I0810 22:50:27.322381   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:8b:73", ip: ""} in network mk-test-preload-20210810224820-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:48:38 +0000 UTC Type:0 Mac:52:54:00:70:8b:73 Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:test-preload-20210810224820-30291 Clientid:01:52:54:00:70:8b:73}
	I0810 22:50:27.322415   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) DBG | domain test-preload-20210810224820-30291 has defined IP address 192.168.50.38 and MAC address 52:54:00:70:8b:73 in network mk-test-preload-20210810224820-30291
	I0810 22:50:27.322570   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetSSHPort
	I0810 22:50:27.322771   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetSSHKeyPath
	I0810 22:50:27.322939   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetSSHKeyPath
	I0810 22:50:27.323103   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetSSHUsername
	I0810 22:50:27.323227   32288 main.go:130] libmachine: Using SSH client type: native
	I0810 22:50:27.323385   32288 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.38 22 <nil> <nil>}
	I0810 22:50:27.323413   32288 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-20210810224820-30291' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-20210810224820-30291/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-20210810224820-30291' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0810 22:50:27.457124   32288 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0810 22:50:27.457155   32288 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube}
	I0810 22:50:27.457176   32288 buildroot.go:174] setting up certificates
	I0810 22:50:27.457195   32288 provision.go:83] configureAuth start
	I0810 22:50:27.457208   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetMachineName
	I0810 22:50:27.457466   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetIP
	I0810 22:50:27.462549   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) DBG | domain test-preload-20210810224820-30291 has defined MAC address 52:54:00:70:8b:73 in network mk-test-preload-20210810224820-30291
	I0810 22:50:27.462951   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:8b:73", ip: ""} in network mk-test-preload-20210810224820-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:48:38 +0000 UTC Type:0 Mac:52:54:00:70:8b:73 Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:test-preload-20210810224820-30291 Clientid:01:52:54:00:70:8b:73}
	I0810 22:50:27.462983   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) DBG | domain test-preload-20210810224820-30291 has defined IP address 192.168.50.38 and MAC address 52:54:00:70:8b:73 in network mk-test-preload-20210810224820-30291
	I0810 22:50:27.463056   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetSSHHostname
	I0810 22:50:27.467145   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) DBG | domain test-preload-20210810224820-30291 has defined MAC address 52:54:00:70:8b:73 in network mk-test-preload-20210810224820-30291
	I0810 22:50:27.467561   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:8b:73", ip: ""} in network mk-test-preload-20210810224820-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:48:38 +0000 UTC Type:0 Mac:52:54:00:70:8b:73 Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:test-preload-20210810224820-30291 Clientid:01:52:54:00:70:8b:73}
	I0810 22:50:27.467594   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) DBG | domain test-preload-20210810224820-30291 has defined IP address 192.168.50.38 and MAC address 52:54:00:70:8b:73 in network mk-test-preload-20210810224820-30291
	I0810 22:50:27.467736   32288 provision.go:137] copyHostCerts
	I0810 22:50:27.467806   32288 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem, removing ...
	I0810 22:50:27.467822   32288 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem
	I0810 22:50:27.467894   32288 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem (1082 bytes)
	I0810 22:50:27.467984   32288 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem, removing ...
	I0810 22:50:27.467994   32288 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem
	I0810 22:50:27.468022   32288 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem (1123 bytes)
	I0810 22:50:27.468074   32288 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem, removing ...
	I0810 22:50:27.468083   32288 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem
	I0810 22:50:27.468103   32288 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem (1679 bytes)
	I0810 22:50:27.468187   32288 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem org=jenkins.test-preload-20210810224820-30291 san=[192.168.50.38 192.168.50.38 localhost 127.0.0.1 minikube test-preload-20210810224820-30291]
	I0810 22:50:27.474663   32288 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.3
	I0810 22:50:27.489728   32288 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.3
	I0810 22:50:27.495959   32288 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.3
	I0810 22:50:27.501520   32288 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.3
	I0810 22:50:27.858487   32288 provision.go:171] copyRemoteCerts
	I0810 22:50:27.858551   32288 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0810 22:50:27.858584   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetSSHHostname
	I0810 22:50:27.864544   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) DBG | domain test-preload-20210810224820-30291 has defined MAC address 52:54:00:70:8b:73 in network mk-test-preload-20210810224820-30291
	I0810 22:50:27.864963   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:8b:73", ip: ""} in network mk-test-preload-20210810224820-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:48:38 +0000 UTC Type:0 Mac:52:54:00:70:8b:73 Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:test-preload-20210810224820-30291 Clientid:01:52:54:00:70:8b:73}
	I0810 22:50:27.864996   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) DBG | domain test-preload-20210810224820-30291 has defined IP address 192.168.50.38 and MAC address 52:54:00:70:8b:73 in network mk-test-preload-20210810224820-30291
	I0810 22:50:27.865164   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetSSHPort
	I0810 22:50:27.865366   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetSSHKeyPath
	I0810 22:50:27.865503   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetSSHUsername
	I0810 22:50:27.865659   32288 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/test-preload-20210810224820-30291/id_rsa Username:docker}
	I0810 22:50:27.963774   32288 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0810 22:50:27.990161   32288 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0810 22:50:28.012145   32288 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0810 22:50:28.030280   32288 provision.go:86] duration metric: configureAuth took 573.075381ms
	I0810 22:50:28.030304   32288 buildroot.go:189] setting minikube options for container-runtime
	I0810 22:50:28.030523   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetSSHHostname
	I0810 22:50:28.035638   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) DBG | domain test-preload-20210810224820-30291 has defined MAC address 52:54:00:70:8b:73 in network mk-test-preload-20210810224820-30291
	I0810 22:50:28.035967   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:8b:73", ip: ""} in network mk-test-preload-20210810224820-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:48:38 +0000 UTC Type:0 Mac:52:54:00:70:8b:73 Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:test-preload-20210810224820-30291 Clientid:01:52:54:00:70:8b:73}
	I0810 22:50:28.036025   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) DBG | domain test-preload-20210810224820-30291 has defined IP address 192.168.50.38 and MAC address 52:54:00:70:8b:73 in network mk-test-preload-20210810224820-30291
	I0810 22:50:28.036154   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetSSHPort
	I0810 22:50:28.036366   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetSSHKeyPath
	I0810 22:50:28.036556   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetSSHKeyPath
	I0810 22:50:28.036717   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetSSHUsername
	I0810 22:50:28.036882   32288 main.go:130] libmachine: Using SSH client type: native
	I0810 22:50:28.037016   32288 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.38 22 <nil> <nil>}
	I0810 22:50:28.037030   32288 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0810 22:50:28.310214   32288 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.3 exists
	I0810 22:50:28.310304   32288 cache.go:97] cache image "k8s.gcr.io/kube-scheduler:v1.17.3" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.3" took 1.159451302s
	I0810 22:50:28.310326   32288 cache.go:81] save to tar file k8s.gcr.io/kube-scheduler:v1.17.3 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.3 succeeded
	I0810 22:50:28.558670   32288 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.3 exists
	I0810 22:50:28.558726   32288 cache.go:97] cache image "k8s.gcr.io/kube-controller-manager:v1.17.3" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.3" took 1.407944654s
	I0810 22:50:28.558741   32288 cache.go:81] save to tar file k8s.gcr.io/kube-controller-manager:v1.17.3 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.3 succeeded
	I0810 22:50:28.559365   32288 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.3 exists
	I0810 22:50:28.559406   32288 cache.go:97] cache image "k8s.gcr.io/kube-apiserver:v1.17.3" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.3" took 1.408538272s
	I0810 22:50:28.559424   32288 cache.go:81] save to tar file k8s.gcr.io/kube-apiserver:v1.17.3 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.3 succeeded
	I0810 22:50:29.030767   32288 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.3 exists
	I0810 22:50:29.030822   32288 cache.go:97] cache image "k8s.gcr.io/kube-proxy:v1.17.3" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.3" took 1.879961468s
	I0810 22:50:29.030842   32288 cache.go:81] save to tar file k8s.gcr.io/kube-proxy:v1.17.3 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.3 succeeded
	I0810 22:50:29.030864   32288 cache.go:88] Successfully saved all images to host disk.
	I0810 22:50:29.237381   32288 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0810 22:50:29.237414   32288 machine.go:91] provisioned docker machine in 2.06682756s
	I0810 22:50:29.237427   32288 start.go:267] post-start starting for "test-preload-20210810224820-30291" (driver="kvm2")
	I0810 22:50:29.237444   32288 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0810 22:50:29.237469   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .DriverName
	I0810 22:50:29.237849   32288 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0810 22:50:29.237886   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetSSHHostname
	I0810 22:50:29.243521   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) DBG | domain test-preload-20210810224820-30291 has defined MAC address 52:54:00:70:8b:73 in network mk-test-preload-20210810224820-30291
	I0810 22:50:29.243885   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:8b:73", ip: ""} in network mk-test-preload-20210810224820-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:48:38 +0000 UTC Type:0 Mac:52:54:00:70:8b:73 Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:test-preload-20210810224820-30291 Clientid:01:52:54:00:70:8b:73}
	I0810 22:50:29.243913   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) DBG | domain test-preload-20210810224820-30291 has defined IP address 192.168.50.38 and MAC address 52:54:00:70:8b:73 in network mk-test-preload-20210810224820-30291
	I0810 22:50:29.244076   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetSSHPort
	I0810 22:50:29.244287   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetSSHKeyPath
	I0810 22:50:29.244449   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetSSHUsername
	I0810 22:50:29.244557   32288 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/test-preload-20210810224820-30291/id_rsa Username:docker}
	I0810 22:50:29.339581   32288 ssh_runner.go:149] Run: cat /etc/os-release
	I0810 22:50:29.344623   32288 info.go:137] Remote host: Buildroot 2020.02.12
	I0810 22:50:29.344646   32288 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/addons for local assets ...
	I0810 22:50:29.344694   32288 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files for local assets ...
	I0810 22:50:29.344774   32288 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/302912.pem -> 302912.pem in /etc/ssl/certs
	I0810 22:50:29.344873   32288 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0810 22:50:29.352106   32288 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/302912.pem --> /etc/ssl/certs/302912.pem (1708 bytes)
	I0810 22:50:29.369674   32288 start.go:270] post-start completed in 132.221284ms
	I0810 22:50:29.369719   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .DriverName
	I0810 22:50:29.369949   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetSSHHostname
	I0810 22:50:29.374583   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) DBG | domain test-preload-20210810224820-30291 has defined MAC address 52:54:00:70:8b:73 in network mk-test-preload-20210810224820-30291
	I0810 22:50:29.374950   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:8b:73", ip: ""} in network mk-test-preload-20210810224820-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:48:38 +0000 UTC Type:0 Mac:52:54:00:70:8b:73 Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:test-preload-20210810224820-30291 Clientid:01:52:54:00:70:8b:73}
	I0810 22:50:29.374990   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) DBG | domain test-preload-20210810224820-30291 has defined IP address 192.168.50.38 and MAC address 52:54:00:70:8b:73 in network mk-test-preload-20210810224820-30291
	I0810 22:50:29.375141   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetSSHPort
	I0810 22:50:29.375332   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetSSHKeyPath
	I0810 22:50:29.375458   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetSSHKeyPath
	I0810 22:50:29.375571   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetSSHUsername
	I0810 22:50:29.375737   32288 main.go:130] libmachine: Using SSH client type: native
	I0810 22:50:29.375900   32288 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.38 22 <nil> <nil>}
	I0810 22:50:29.375915   32288 main.go:130] libmachine: About to run SSH command:
	date +%s.%N
	I0810 22:50:29.509261   32288 main.go:130] libmachine: SSH cmd err, output: <nil>: 1628635829.510567305
	
	I0810 22:50:29.509288   32288 fix.go:212] guest clock: 1628635829.510567305
	I0810 22:50:29.509298   32288 fix.go:225] Guest: 2021-08-10 22:50:29.510567305 +0000 UTC Remote: 2021-08-10 22:50:29.369932188 +0000 UTC m=+2.449959969 (delta=140.635117ms)
	I0810 22:50:29.509323   32288 fix.go:196] guest clock delta is within tolerance: 140.635117ms
	I0810 22:50:29.509329   32288 fix.go:57] fixHost completed within 2.358260247s
	I0810 22:50:29.509336   32288 start.go:80] releasing machines lock for "test-preload-20210810224820-30291", held for 2.358288702s
	I0810 22:50:29.509374   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .DriverName
	I0810 22:50:29.509622   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetIP
	I0810 22:50:29.514739   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) DBG | domain test-preload-20210810224820-30291 has defined MAC address 52:54:00:70:8b:73 in network mk-test-preload-20210810224820-30291
	I0810 22:50:29.515045   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:8b:73", ip: ""} in network mk-test-preload-20210810224820-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:48:38 +0000 UTC Type:0 Mac:52:54:00:70:8b:73 Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:test-preload-20210810224820-30291 Clientid:01:52:54:00:70:8b:73}
	I0810 22:50:29.515082   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) DBG | domain test-preload-20210810224820-30291 has defined IP address 192.168.50.38 and MAC address 52:54:00:70:8b:73 in network mk-test-preload-20210810224820-30291
	I0810 22:50:29.515261   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .DriverName
	I0810 22:50:29.515441   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .DriverName
	I0810 22:50:29.515945   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .DriverName
	I0810 22:50:29.516235   32288 ssh_runner.go:149] Run: systemctl --version
	I0810 22:50:29.516256   32288 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0810 22:50:29.516269   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetSSHHostname
	I0810 22:50:29.516290   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetSSHHostname
	I0810 22:50:29.522872   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) DBG | domain test-preload-20210810224820-30291 has defined MAC address 52:54:00:70:8b:73 in network mk-test-preload-20210810224820-30291
	I0810 22:50:29.523815   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) DBG | domain test-preload-20210810224820-30291 has defined MAC address 52:54:00:70:8b:73 in network mk-test-preload-20210810224820-30291
	I0810 22:50:29.523825   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetSSHPort
	I0810 22:50:29.523840   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:8b:73", ip: ""} in network mk-test-preload-20210810224820-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:48:38 +0000 UTC Type:0 Mac:52:54:00:70:8b:73 Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:test-preload-20210810224820-30291 Clientid:01:52:54:00:70:8b:73}
	I0810 22:50:29.523861   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetSSHPort
	I0810 22:50:29.523861   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) DBG | domain test-preload-20210810224820-30291 has defined IP address 192.168.50.38 and MAC address 52:54:00:70:8b:73 in network mk-test-preload-20210810224820-30291
	I0810 22:50:29.523942   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:8b:73", ip: ""} in network mk-test-preload-20210810224820-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:48:38 +0000 UTC Type:0 Mac:52:54:00:70:8b:73 Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:test-preload-20210810224820-30291 Clientid:01:52:54:00:70:8b:73}
	I0810 22:50:29.523972   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) DBG | domain test-preload-20210810224820-30291 has defined IP address 192.168.50.38 and MAC address 52:54:00:70:8b:73 in network mk-test-preload-20210810224820-30291
	I0810 22:50:29.523993   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetSSHKeyPath
	I0810 22:50:29.524016   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetSSHKeyPath
	I0810 22:50:29.524187   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetSSHUsername
	I0810 22:50:29.524188   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetSSHUsername
	I0810 22:50:29.524359   32288 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/test-preload-20210810224820-30291/id_rsa Username:docker}
	I0810 22:50:29.524360   32288 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/test-preload-20210810224820-30291/id_rsa Username:docker}
	I0810 22:50:29.613612   32288 preload.go:131] Checking if preload exists for k8s version v1.17.3 and runtime crio
	I0810 22:50:29.613688   32288 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0810 22:50:29.630111   32288 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0810 22:50:29.640449   32288 docker.go:153] disabling docker service ...
	I0810 22:50:29.640500   32288 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0810 22:50:29.650372   32288 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0810 22:50:29.662251   32288 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0810 22:50:29.858992   32288 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0810 22:50:30.069610   32288 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0810 22:50:30.079656   32288 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0810 22:50:30.093852   32288 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.1"|' -i /etc/crio/crio.conf"
	I0810 22:50:30.101704   32288 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0810 22:50:30.108557   32288 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0810 22:50:30.115203   32288 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0810 22:50:30.298692   32288 ssh_runner.go:149] Run: sudo systemctl start crio
	I0810 22:50:30.480518   32288 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0810 22:50:30.480597   32288 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0810 22:50:30.490609   32288 start.go:417] Will wait 60s for crictl version
	I0810 22:50:30.490669   32288 ssh_runner.go:149] Run: sudo crictl version
	I0810 22:50:30.561357   32288 start.go:426] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.2
	RuntimeApiVersion:  v1alpha1
	I0810 22:50:30.561479   32288 ssh_runner.go:149] Run: crio --version
	I0810 22:50:30.734477   32288 ssh_runner.go:149] Run: crio --version
	I0810 22:50:30.881618   32288 out.go:177] * Preparing Kubernetes v1.17.3 on CRI-O 1.20.2 ...
	I0810 22:50:30.881662   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetIP
	I0810 22:50:30.886661   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) DBG | domain test-preload-20210810224820-30291 has defined MAC address 52:54:00:70:8b:73 in network mk-test-preload-20210810224820-30291
	I0810 22:50:30.886995   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:8b:73", ip: ""} in network mk-test-preload-20210810224820-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:48:38 +0000 UTC Type:0 Mac:52:54:00:70:8b:73 Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:test-preload-20210810224820-30291 Clientid:01:52:54:00:70:8b:73}
	I0810 22:50:30.887028   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) DBG | domain test-preload-20210810224820-30291 has defined IP address 192.168.50.38 and MAC address 52:54:00:70:8b:73 in network mk-test-preload-20210810224820-30291
	I0810 22:50:30.887150   32288 ssh_runner.go:149] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0810 22:50:30.891505   32288 preload.go:131] Checking if preload exists for k8s version v1.17.3 and runtime crio
	I0810 22:50:30.891543   32288 ssh_runner.go:149] Run: sudo crictl images --output json
	I0810 22:50:30.936893   32288 crio.go:420] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.17.3". assuming images are not preloaded.
	I0810 22:50:30.936920   32288 cache_images.go:78] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.17.3 k8s.gcr.io/kube-controller-manager:v1.17.3 k8s.gcr.io/kube-scheduler:v1.17.3 k8s.gcr.io/kube-proxy:v1.17.3 k8s.gcr.io/pause:3.1 k8s.gcr.io/etcd:3.4.3-0 k8s.gcr.io/coredns:1.6.5 gcr.io/k8s-minikube/storage-provisioner:v5 docker.io/kubernetesui/dashboard:v2.1.0 docker.io/kubernetesui/metrics-scraper:v1.0.4]
	I0810 22:50:30.936986   32288 image.go:133] retrieving image: docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0810 22:50:30.937010   32288 image.go:133] retrieving image: k8s.gcr.io/kube-controller-manager:v1.17.3
	I0810 22:50:30.937012   32288 image.go:133] retrieving image: k8s.gcr.io/pause:3.1
	I0810 22:50:30.937064   32288 image.go:133] retrieving image: k8s.gcr.io/kube-proxy:v1.17.3
	I0810 22:50:30.937099   32288 image.go:133] retrieving image: k8s.gcr.io/kube-apiserver:v1.17.3
	I0810 22:50:30.937168   32288 image.go:133] retrieving image: docker.io/kubernetesui/dashboard:v2.1.0
	I0810 22:50:30.937073   32288 image.go:133] retrieving image: k8s.gcr.io/kube-scheduler:v1.17.3
	I0810 22:50:30.937174   32288 image.go:133] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0810 22:50:30.937254   32288 image.go:133] retrieving image: k8s.gcr.io/coredns:1.6.5
	I0810 22:50:30.937285   32288 image.go:133] retrieving image: k8s.gcr.io/etcd:3.4.3-0
	I0810 22:50:30.938052   32288 image.go:175] daemon lookup for k8s.gcr.io/kube-proxy:v1.17.3: Error response from daemon: reference does not exist
	I0810 22:50:30.938196   32288 image.go:175] daemon lookup for k8s.gcr.io/kube-apiserver:v1.17.3: Error response from daemon: reference does not exist
	I0810 22:50:30.938312   32288 image.go:175] daemon lookup for k8s.gcr.io/kube-scheduler:v1.17.3: Error response from daemon: reference does not exist
	I0810 22:50:30.955561   32288 image.go:175] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.17.3: Error response from daemon: reference does not exist
	I0810 22:50:30.955704   32288 image.go:171] found k8s.gcr.io/pause:3.1 locally: &{Image:0xc0004e6080}
	I0810 22:50:30.955787   32288 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/pause:3.1
	I0810 22:50:31.235287   32288 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-apiserver:v1.17.3
	I0810 22:50:31.236525   32288 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-scheduler:v1.17.3
	I0810 22:50:31.281012   32288 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-proxy:v1.17.3
	I0810 22:50:31.295384   32288 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-controller-manager:v1.17.3
	I0810 22:50:31.297753   32288 image.go:171] found gcr.io/k8s-minikube/storage-provisioner:v5 locally: &{Image:0xc000200300}
	I0810 22:50:31.297842   32288 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0810 22:50:31.416092   32288 image.go:171] found index.docker.io/kubernetesui/metrics-scraper:v1.0.4 locally: &{Image:0xc000200420}
	I0810 22:50:31.416216   32288 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0810 22:50:31.450467   32288 image.go:171] found k8s.gcr.io/coredns:1.6.5 locally: &{Image:0xc00071c220}
	I0810 22:50:31.450570   32288 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/coredns:1.6.5
	I0810 22:50:31.789543   32288 cache_images.go:106] "k8s.gcr.io/kube-apiserver:v1.17.3" needs transfer: "k8s.gcr.io/kube-apiserver:v1.17.3" does not exist at hash "90d27391b7808cde8d9a81cfa43b1e81de5c4912b4b52a7dccb19eb4fe3c236b" in container runtime
	I0810 22:50:31.789596   32288 cri.go:205] Removing image: k8s.gcr.io/kube-apiserver:v1.17.3
	I0810 22:50:31.789635   32288 cache_images.go:106] "k8s.gcr.io/kube-scheduler:v1.17.3" needs transfer: "k8s.gcr.io/kube-scheduler:v1.17.3" does not exist at hash "d109c0821a2b9225b69b99a95000df5cd1de5d606bc187b3620d730d7769c6ad" in container runtime
	I0810 22:50:31.789677   32288 cri.go:205] Removing image: k8s.gcr.io/kube-scheduler:v1.17.3
	I0810 22:50:31.789722   32288 ssh_runner.go:149] Run: which crictl
	I0810 22:50:31.789641   32288 ssh_runner.go:149] Run: which crictl
	I0810 22:50:31.948647   32288 cache_images.go:106] "k8s.gcr.io/kube-proxy:v1.17.3" needs transfer: "k8s.gcr.io/kube-proxy:v1.17.3" does not exist at hash "ae853e93800dc2572aeb425e5765cf9b25212bfc43695299e61dece06cffa4a1" in container runtime
	I0810 22:50:31.948705   32288 cri.go:205] Removing image: k8s.gcr.io/kube-proxy:v1.17.3
	I0810 22:50:31.948750   32288 ssh_runner.go:149] Run: which crictl
	I0810 22:50:31.948657   32288 cache_images.go:106] "k8s.gcr.io/kube-controller-manager:v1.17.3" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.17.3" does not exist at hash "b0f1517c1f4bb153597033d2efd81a9ac630e6a569307f993b2c0368afcf0302" in container runtime
	I0810 22:50:31.948903   32288 cri.go:205] Removing image: k8s.gcr.io/kube-controller-manager:v1.17.3
	I0810 22:50:31.948965   32288 ssh_runner.go:149] Run: which crictl
	I0810 22:50:32.002798   32288 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.17.3
	I0810 22:50:32.002856   32288 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.17.3
	I0810 22:50:32.002881   32288 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.17.3
	I0810 22:50:32.002916   32288 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/kube-proxy:v1.17.3
	I0810 22:50:32.114733   32288 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.3
	I0810 22:50:32.114840   32288 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.17.3
	I0810 22:50:32.114840   32288 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.3
	I0810 22:50:32.114913   32288 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.3
	I0810 22:50:32.114928   32288 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.17.3
	I0810 22:50:32.114934   32288 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.3
	I0810 22:50:32.114980   32288 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.17.3
	I0810 22:50:32.114999   32288 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.17.3
	I0810 22:50:32.122502   32288 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-apiserver_v1.17.3: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.17.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.17.3': No such file or directory
	I0810 22:50:32.122527   32288 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.3 --> /var/lib/minikube/images/kube-apiserver_v1.17.3 (50635776 bytes)
	I0810 22:50:32.136232   32288 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-proxy_v1.17.3: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.17.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.17.3': No such file or directory
	I0810 22:50:32.136262   32288 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.3 --> /var/lib/minikube/images/kube-proxy_v1.17.3 (48706048 bytes)
	I0810 22:50:32.136261   32288 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-controller-manager_v1.17.3: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.17.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.17.3': No such file or directory
	I0810 22:50:32.136287   32288 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.3 --> /var/lib/minikube/images/kube-controller-manager_v1.17.3 (48810496 bytes)
	I0810 22:50:32.141207   32288 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-scheduler_v1.17.3: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.17.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.17.3': No such file or directory
	I0810 22:50:32.141236   32288 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.3 --> /var/lib/minikube/images/kube-scheduler_v1.17.3 (33822208 bytes)
	I0810 22:50:32.987934   32288 crio.go:191] Loading image: /var/lib/minikube/images/kube-scheduler_v1.17.3
	I0810 22:50:32.988034   32288 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.17.3
	I0810 22:50:33.328675   32288 image.go:171] found index.docker.io/kubernetesui/dashboard:v2.1.0 locally: &{Image:0xc000200440}
	I0810 22:50:33.328803   32288 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} docker.io/kubernetesui/dashboard:v2.1.0
	I0810 22:50:33.898882   32288 image.go:171] found k8s.gcr.io/etcd:3.4.3-0 locally: &{Image:0xc0002003c0}
	I0810 22:50:33.899017   32288 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/etcd:3.4.3-0
	I0810 22:50:35.878938   32288 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.17.3: (2.890877606s)
	I0810 22:50:35.878970   32288 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.3 from cache
	I0810 22:50:35.878992   32288 crio.go:191] Loading image: /var/lib/minikube/images/kube-apiserver_v1.17.3
	I0810 22:50:35.878998   32288 ssh_runner.go:189] Completed: sudo podman image inspect --format {{.Id}} docker.io/kubernetesui/dashboard:v2.1.0: (2.55016883s)
	I0810 22:50:35.879029   32288 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.17.3
	I0810 22:50:35.879059   32288 ssh_runner.go:189] Completed: sudo podman image inspect --format {{.Id}} k8s.gcr.io/etcd:3.4.3-0: (1.980007229s)
	I0810 22:50:40.964916   32288 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.17.3: (5.085858535s)
	I0810 22:50:40.964953   32288 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.3 from cache
	I0810 22:50:40.964985   32288 crio.go:191] Loading image: /var/lib/minikube/images/kube-proxy_v1.17.3
	I0810 22:50:40.965037   32288 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.17.3
	I0810 22:50:43.663704   32288 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.17.3: (2.69864274s)
	I0810 22:50:43.663733   32288 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.3 from cache
	I0810 22:50:43.663760   32288 crio.go:191] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.17.3
	I0810 22:50:43.663817   32288 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.17.3
	I0810 22:50:48.215401   32288 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.17.3: (4.551554006s)
	I0810 22:50:48.215428   32288 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.3 from cache
	I0810 22:50:48.215454   32288 cache_images.go:113] Successfully loaded all cached images
	I0810 22:50:48.215462   32288 cache_images.go:82] LoadImages completed in 17.278528567s
	I0810 22:50:48.215529   32288 ssh_runner.go:149] Run: crio config
	I0810 22:50:48.527039   32288 cni.go:93] Creating CNI manager for ""
	I0810 22:50:48.527071   32288 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0810 22:50:48.527087   32288 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0810 22:50:48.527106   32288 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.38 APIServerPort:8443 KubernetesVersion:v1.17.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-20210810224820-30291 NodeName:test-preload-20210810224820-30291 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.38"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.50.38 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0810 22:50:48.527281   32288 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.38
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "test-preload-20210810224820-30291"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.38
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.38"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.17.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0810 22:50:48.527388   32288 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.17.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --hostname-override=test-preload-20210810224820-30291 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.38 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.17.3 ClusterName:test-preload-20210810224820-30291 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0810 22:50:48.527453   32288 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.17.3
	I0810 22:50:48.537153   32288 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.17.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.17.3': No such file or directory
	
	Initiating transfer...
	I0810 22:50:48.537217   32288 ssh_runner.go:149] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.17.3
	I0810 22:50:48.544681   32288 download.go:92] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/linux/v1.17.3/kubectl
	I0810 22:50:48.544697   32288 download.go:92] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/linux/v1.17.3/kubelet
	I0810 22:50:48.544687   32288 download.go:92] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/linux/v1.17.3/kubeadm
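Each binary above is fetched with a `?checksum=file:…sha256` query, i.e. the downloader validates the file against a published SHA-256 digest before installing it. A minimal sketch of that verification step, assuming the digest has already been read from the `.sha256` file (helper name `sha256Hex` is hypothetical):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// sha256Hex returns the hex-encoded SHA-256 digest of data, the kind of
// value compared against the published .sha256 file after a download.
func sha256Hex(data []byte) string {
	sum := sha256.Sum256(data)
	return hex.EncodeToString(sum[:])
}

func main() {
	// Empty input stands in for downloaded bytes; its digest is well known.
	const want = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
	if sha256Hex(nil) == want {
		fmt.Println("checksum ok")
	}
}
```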
	I0810 22:50:49.133568   32288 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.3/kubectl
	I0810 22:50:49.144687   32288 ssh_runner.go:306] existence check for /var/lib/minikube/binaries/v1.17.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.3/kubectl': No such file or directory
	I0810 22:50:49.144719   32288 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/linux/v1.17.3/kubectl --> /var/lib/minikube/binaries/v1.17.3/kubectl (43499520 bytes)
	I0810 22:50:49.258151   32288 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.3/kubeadm
	I0810 22:50:49.294486   32288 ssh_runner.go:306] existence check for /var/lib/minikube/binaries/v1.17.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.3/kubeadm': No such file or directory
	I0810 22:50:49.294554   32288 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/linux/v1.17.3/kubeadm --> /var/lib/minikube/binaries/v1.17.3/kubeadm (39346176 bytes)
	I0810 22:50:49.636071   32288 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0810 22:50:49.647619   32288 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0810 22:50:49.686476   32288 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.3/kubelet
	I0810 22:50:49.694711   32288 ssh_runner.go:306] existence check for /var/lib/minikube/binaries/v1.17.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.3/kubelet': No such file or directory
	I0810 22:50:49.694756   32288 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/linux/v1.17.3/kubelet --> /var/lib/minikube/binaries/v1.17.3/kubelet (111584792 bytes)
	I0810 22:50:50.331316   32288 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0810 22:50:50.340763   32288 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (513 bytes)
	I0810 22:50:50.355427   32288 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0810 22:50:50.368742   32288 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2077 bytes)
	I0810 22:50:50.381964   32288 ssh_runner.go:149] Run: grep 192.168.50.38	control-plane.minikube.internal$ /etc/hosts
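The `grep` above checks whether `/etc/hosts` already maps the cluster IP to `control-plane.minikube.internal` before adding an entry. A sketch of that check in Go (the `hasHostEntry` helper is hypothetical, mirroring the grep, not minikube's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// hasHostEntry reports whether an /etc/hosts-style blob already maps
// ip to name (first field is the address, later fields are hostnames).
func hasHostEntry(hosts, ip, name string) bool {
	for _, line := range strings.Split(hosts, "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[0] == ip {
			for _, h := range fields[1:] {
				if h == name {
					return true
				}
			}
		}
	}
	return false
}

func main() {
	hosts := "127.0.0.1 localhost\n192.168.50.38 control-plane.minikube.internal\n"
	fmt.Println(hasHostEntry(hosts, "192.168.50.38", "control-plane.minikube.internal"))
	// → true
}
```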
	I0810 22:50:50.386291   32288 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/test-preload-20210810224820-30291 for IP: 192.168.50.38
	I0810 22:50:50.386346   32288 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key
	I0810 22:50:50.386362   32288 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key
	I0810 22:50:50.386413   32288 certs.go:290] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/test-preload-20210810224820-30291/client.key
	I0810 22:50:50.386431   32288 certs.go:290] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/test-preload-20210810224820-30291/apiserver.key.01da567d
	I0810 22:50:50.386452   32288 certs.go:290] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/test-preload-20210810224820-30291/proxy-client.key
	I0810 22:50:50.386542   32288 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/30291.pem (1338 bytes)
	W0810 22:50:50.386580   32288 certs.go:369] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/30291_empty.pem, impossibly tiny 0 bytes
	I0810 22:50:50.386592   32288 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem (1679 bytes)
	I0810 22:50:50.386617   32288 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem (1082 bytes)
	I0810 22:50:50.386649   32288 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem (1123 bytes)
	I0810 22:50:50.386680   32288 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem (1679 bytes)
	I0810 22:50:50.386743   32288 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/302912.pem (1708 bytes)
	I0810 22:50:50.388074   32288 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/test-preload-20210810224820-30291/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0810 22:50:50.405752   32288 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/test-preload-20210810224820-30291/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0810 22:50:50.423228   32288 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/test-preload-20210810224820-30291/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0810 22:50:50.442233   32288 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/test-preload-20210810224820-30291/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0810 22:50:50.460094   32288 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0810 22:50:50.479134   32288 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0810 22:50:50.496657   32288 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0810 22:50:50.516776   32288 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0810 22:50:50.550582   32288 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/302912.pem --> /usr/share/ca-certificates/302912.pem (1708 bytes)
	I0810 22:50:50.567445   32288 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0810 22:50:50.584193   32288 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/30291.pem --> /usr/share/ca-certificates/30291.pem (1338 bytes)
	I0810 22:50:50.599729   32288 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0810 22:50:50.610870   32288 ssh_runner.go:149] Run: openssl version
	I0810 22:50:50.616557   32288 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/302912.pem && ln -fs /usr/share/ca-certificates/302912.pem /etc/ssl/certs/302912.pem"
	I0810 22:50:50.624577   32288 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/302912.pem
	I0810 22:50:50.631470   32288 certs.go:416] hashing: -rw-r--r-- 1 root root 1708 Aug 10 22:27 /usr/share/ca-certificates/302912.pem
	I0810 22:50:50.631514   32288 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/302912.pem
	I0810 22:50:50.637452   32288 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/302912.pem /etc/ssl/certs/3ec20f2e.0"
	I0810 22:50:50.644177   32288 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0810 22:50:50.652266   32288 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0810 22:50:50.657084   32288 certs.go:416] hashing: -rw-r--r-- 1 root root 1111 Aug 10 22:18 /usr/share/ca-certificates/minikubeCA.pem
	I0810 22:50:50.657123   32288 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0810 22:50:50.662842   32288 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0810 22:50:50.669784   32288 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/30291.pem && ln -fs /usr/share/ca-certificates/30291.pem /etc/ssl/certs/30291.pem"
	I0810 22:50:50.677830   32288 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/30291.pem
	I0810 22:50:50.682702   32288 certs.go:416] hashing: -rw-r--r-- 1 root root 1338 Aug 10 22:27 /usr/share/ca-certificates/30291.pem
	I0810 22:50:50.682742   32288 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/30291.pem
	I0810 22:50:50.688808   32288 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/30291.pem /etc/ssl/certs/51391683.0"
	I0810 22:50:50.695751   32288 kubeadm.go:390] StartCluster: {Name:test-preload-20210810224820-30291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12122/minikube-v1.22.0-1628238775-12122.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.3 ClusterName:test-pre
load-20210810224820-30291 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.38 Port:8443 KubernetesVersion:v1.17.3 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0810 22:50:50.695865   32288 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0810 22:50:50.695909   32288 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0810 22:50:50.733036   32288 cri.go:76] found id: "b9f131c8976914b86f7ef77a7914c676182f1e4fa608de3ce864138068cb90c3"
	I0810 22:50:50.733055   32288 cri.go:76] found id: "dc240072d40333ee1ec6927e58f7048d1efc45f2591096267eb40a14bb891744"
	I0810 22:50:50.733061   32288 cri.go:76] found id: "41003a20744ba44844bac954b01f50201088f18d7455625552f316ad5bf12024"
	I0810 22:50:50.733067   32288 cri.go:76] found id: "4e1d34a786ab389aa821b4f9c92713d5556d250d23142b50dfc40e1fbc8965c4"
	I0810 22:50:50.733072   32288 cri.go:76] found id: "edde24c1a9c539830d810107febdd0ceaffbf9652c9cd6fa9b4faa07af471a26"
	I0810 22:50:50.733078   32288 cri.go:76] found id: "9029fc5bb880e165bc23810305ee69777ad51357946b544b0960039cfc7e8870"
	I0810 22:50:50.733083   32288 cri.go:76] found id: "04efcb33acc3b6322ec09f6ab03cc72fb4eef3851be315661768b2c28b7580ca"
	I0810 22:50:50.733089   32288 cri.go:76] found id: "c53eed59215f6778e54e6b1db239641870d0f6cbb1005d78553d81db5c7c652d"
	I0810 22:50:50.733095   32288 cri.go:76] found id: ""
	I0810 22:50:50.733131   32288 ssh_runner.go:149] Run: sudo runc list -f json
	I0810 22:50:50.773315   32288 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"04efcb33acc3b6322ec09f6ab03cc72fb4eef3851be315661768b2c28b7580ca","pid":3574,"status":"running","bundle":"/run/containers/storage/overlay-containers/04efcb33acc3b6322ec09f6ab03cc72fb4eef3851be315661768b2c28b7580ca/userdata","rootfs":"/var/lib/containers/storage/overlay/510ef1e3b715f6ac485a320be3573e08eed893223d31d04ed9737d787a2ca419/merged","created":"2021-08-10T22:49:47.160840275Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"37ef62f1","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"37ef62f1\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.termina
tionMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"04efcb33acc3b6322ec09f6ab03cc72fb4eef3851be315661768b2c28b7580ca","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-10T22:49:46.909216701Z","io.kubernetes.cri-o.Image":"0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-apiserver:v1.17.0","io.kubernetes.cri-o.ImageRef":"0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-test-preload-20210810224820-30291\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"ef1fd623bd23c3b480f66cfc795fa0ab\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-test-preload-20210810224820-30291_ef1fd623bd23c3b480f66cfc795fa0ab/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":
"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/510ef1e3b715f6ac485a320be3573e08eed893223d31d04ed9737d787a2ca419/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-test-preload-20210810224820-30291_kube-system_ef1fd623bd23c3b480f66cfc795fa0ab_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/f379dea9d5ffcac7e24e23f2e0dab8b1fc4800f05db9951983bce94277e3379a/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"f379dea9d5ffcac7e24e23f2e0dab8b1fc4800f05db9951983bce94277e3379a","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-test-preload-20210810224820-30291_kube-system_ef1fd623bd23c3b480f66cfc795fa0ab_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/ef1fd623bd23c3b480f66cfc795fa0ab/etc-hosts\
",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/ef1fd623bd23c3b480f66cfc795fa0ab/containers/kube-apiserver/c2a9ee37\",\"readonly\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-apiserver-test-preload-20210810224820-30291","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"ef1fd623bd23c3b480f66cfc795fa0ab","kubernetes.io/config.hash":"ef1fd623bd23c3b480f66cfc795fa0ab","kubernetes.io/config.seen":"2021-08-10T22:49:44.337557499Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":
"root"},{"ociVersion":"1.0.2-dev","id":"41003a20744ba44844bac954b01f50201088f18d7455625552f316ad5bf12024","pid":4484,"status":"running","bundle":"/run/containers/storage/overlay-containers/41003a20744ba44844bac954b01f50201088f18d7455625552f316ad5bf12024/userdata","rootfs":"/var/lib/containers/storage/overlay/121259d8e090a39840873b5b69b0796ee2da9c41a396c450ba69c8b8570d1f22/merged","created":"2021-08-10T22:50:13.574363514Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"f9943294","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\
":\"f9943294\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"41003a20744ba44844bac954b01f50201088f18d7455625552f316ad5bf12024","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-10T22:50:13.417092207Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/coredns:1.6.5","io.kubernetes.cri-o.ImageRef":"70f311871ae12c14bd0e02028f249f933f925e4
370744e4e35f706da773a8f61","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-6955765f44-9gbh9\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"8abdfb71-f1fb-4e43-a76f-55549541d3f5\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-6955765f44-9gbh9_8abdfb71-f1fb-4e43-a76f-55549541d3f5/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/121259d8e090a39840873b5b69b0796ee2da9c41a396c450ba69c8b8570d1f22/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-6955765f44-9gbh9_kube-system_8abdfb71-f1fb-4e43-a76f-55549541d3f5_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/51ebf16de90b214fe88bd71ea32d688a4e2b2b7b9c164da070627206dd35a1da/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"51ebf16de90b214fe88bd71ea32d688a4e2b2b7b9c164da070627206dd35a1da","io.kubernetes.cri-o.SandboxNam
e":"k8s_coredns-6955765f44-9gbh9_kube-system_8abdfb71-f1fb-4e43-a76f-55549541d3f5_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/8abdfb71-f1fb-4e43-a76f-55549541d3f5/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/8abdfb71-f1fb-4e43-a76f-55549541d3f5/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/8abdfb71-f1fb-4e43-a76f-55549541d3f5/containers/coredns/7217bf34\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/8abdfb71-f1fb-4e43-a76f-55549541d3f5/volumes/kubernetes.io~secret/coredns-token-s2ntq\",\"readonly\":true}]","io.kubernetes.pod.name":"coredns-6955765f44-
9gbh9","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"8abdfb71-f1fb-4e43-a76f-55549541d3f5","kubernetes.io/config.seen":"2021-08-10T22:50:12.523568229Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4e1d34a786ab389aa821b4f9c92713d5556d250d23142b50dfc40e1fbc8965c4","pid":4160,"status":"running","bundle":"/run/containers/storage/overlay-containers/4e1d34a786ab389aa821b4f9c92713d5556d250d23142b50dfc40e1fbc8965c4/userdata","rootfs":"/var/lib/containers/storage/overlay/0d029a961a83f6d6d05deed608c13722f819ba0801525f0592e7d22252a5ebb2/merged","created":"2021-08-10T22:50:12.356596398Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"ecf76616","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMess
agePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"ecf76616\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"4e1d34a786ab389aa821b4f9c92713d5556d250d23142b50dfc40e1fbc8965c4","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-10T22:50:12.184257752Z","io.kubernetes.cri-o.Image":"7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-proxy:v1.17.0","io.kubernetes.cri-o.ImageRef":"7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-zvm9g\",\"io.kubernetes.p
od.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"26937be5-e3e9-4284-a360-e84c2f175b4f\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-zvm9g_26937be5-e3e9-4284-a360-e84c2f175b4f/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/0d029a961a83f6d6d05deed608c13722f819ba0801525f0592e7d22252a5ebb2/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-zvm9g_kube-system_26937be5-e3e9-4284-a360-e84c2f175b4f_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/8817e7bc2fbab931abbf90e13416d071181d83b00531455160b6a7cc899b2088/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"8817e7bc2fbab931abbf90e13416d071181d83b00531455160b6a7cc899b2088","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-zvm9g_kube-system_26937be5-e3e9-4284-a360-e84c2f175b4f_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.Stdin
Once":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/26937be5-e3e9-4284-a360-e84c2f175b4f/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/26937be5-e3e9-4284-a360-e84c2f175b4f/containers/kube-proxy/bd4d7be3\",\"readonly\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/26937be5-e3e9-4284-a360-e84c2f175b4f/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/26937be5-e3e9-4284-a360-e84c2f175b4f/volumes/kubernetes.io~secret/kube-proxy-token-2qw2p\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-proxy-zvm9
g","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"26937be5-e3e9-4284-a360-e84c2f175b4f","kubernetes.io/config.seen":"2021-08-10T22:50:11.309447742Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"51ebf16de90b214fe88bd71ea32d688a4e2b2b7b9c164da070627206dd35a1da","pid":4388,"status":"running","bundle":"/run/containers/storage/overlay-containers/51ebf16de90b214fe88bd71ea32d688a4e2b2b7b9c164da070627206dd35a1da/userdata","rootfs":"/var/lib/containers/storage/overlay/8186d914da5fa4b2f17f54c03e62d0fee691e83a0d0ecd748bc153ae90ee4aca/merged","created":"2021-08-10T22:50:13.320916823Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-10T22:50:12.523568229Z\",\"kubernetes.io/config.source\":\"a
pi\"}","io.kubernetes.cri-o.CNIResult":"{\"cniVersion\":\"0.4.0\",\"interfaces\":[{\"name\":\"bridge\",\"mac\":\"6e:b8:4e:13:f9:c3\"},{\"name\":\"vethce5150df\",\"mac\":\"ca:5c:a8:e8:06:85\"},{\"name\":\"eth0\",\"mac\":\"3e:76:86:14:23:21\",\"sandbox\":\"/var/run/netns/0a64f832-150b-4427-b97c-31c8ccf17800\"}],\"ips\":[{\"version\":\"4\",\"interface\":2,\"address\":\"10.244.0.2/16\",\"gateway\":\"10.244.0.1\"}],\"routes\":[{\"dst\":\"0.0.0.0/0\",\"gw\":\"10.244.0.1\"}],\"dns\":{}}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pod8abdfb71_f1fb_4e43_a76f_55549541d3f5.slice","io.kubernetes.cri-o.ContainerID":"51ebf16de90b214fe88bd71ea32d688a4e2b2b7b9c164da070627206dd35a1da","io.kubernetes.cri-o.ContainerName":"k8s_POD_coredns-6955765f44-9gbh9_kube-system_8abdfb71-f1fb-4e43-a76f-55549541d3f5_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-10T22:50:12.971011755Z","io.kubernetes.cri-o.HostName":"coredns-6955765f44-9gbh9","io.kubernetes.cri-o.HostNetwork":"false","io.
kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/51ebf16de90b214fe88bd71ea32d688a4e2b2b7b9c164da070627206dd35a1da/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"coredns-6955765f44-9gbh9","io.kubernetes.cri-o.Labels":"{\"k8s-app\":\"kube-dns\",\"pod-template-hash\":\"6955765f44\",\"io.kubernetes.pod.uid\":\"8abdfb71-f1fb-4e43-a76f-55549541d3f5\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"coredns-6955765f44-9gbh9\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-6955765f44-9gbh9_8abdfb71-f1fb-4e43-a76f-55549541d3f5/51ebf16de90b214fe88bd71ea32d688a4e2b2b7b9c164da070627206dd35a1da.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns-6955765f44-9gbh9\",\"uid\":\"8abdfb71-f1fb-4e43-a76f-55549541d3f5\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/8186d914da5fa4b2f17f54c03e62d0fee6
91e83a0d0ecd748bc153ae90ee4aca/merged","io.kubernetes.cri-o.Name":"k8s_coredns-6955765f44-9gbh9_kube-system_8abdfb71-f1fb-4e43-a76f-55549541d3f5_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"false","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/51ebf16de90b214fe88bd71ea32d688a4e2b2b7b9c164da070627206dd35a1da/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"51ebf16de90b214fe88bd71ea32d688a4e2b2b7b9c164da070627206dd35a1da","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/51ebf16de90b214fe88bd71ea32d688a4e2b2b7b9c164da070627206dd35a1da/userdata/shm","io.kubernetes.pod.name":"coredns-6955765f44-9gbh9","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"8abdfb71-f1fb-4e43-a76f-55549541d3f5","k8s-app":"kube-dns","
kubernetes.io/config.seen":"2021-08-10T22:50:12.523568229Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-hash":"6955765f44"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5d6221697b2798582f8c5d189f1bf1b3372b25cba9c17d3ef7e345aea540944e","pid":5125,"status":"running","bundle":"/run/containers/storage/overlay-containers/5d6221697b2798582f8c5d189f1bf1b3372b25cba9c17d3ef7e345aea540944e/userdata","rootfs":"/var/lib/containers/storage/overlay/1613c47d4f44491bb20b8f5e8999353e1f712d17c9bf3326ea3ac27df271e0d1/merged","created":"2021-08-10T22:50:14.470462735Z","annotations":{"addonmanager.kubernetes.io/mode":"Reconcile","integration-test":"storage-provisioner","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"v1\\\",\\\"kind\\\":\\\"Pod\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"addonmanager.kubernetes.
io/mode\\\":\\\"Reconcile\\\",\\\"integration-test\\\":\\\"storage-provisioner\\\"},\\\"name\\\":\\\"storage-provisioner\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/storage-provisioner\\\"],\\\"image\\\":\\\"gcr.io/k8s-minikube/storage-provisioner:v5\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"name\\\":\\\"storage-provisioner\\\",\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"}]}],\\\"hostNetwork\\\":true,\\\"serviceAccountName\\\":\\\"storage-provisioner\\\",\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/tmp\\\",\\\"type\\\":\\\"Directory\\\"},\\\"name\\\":\\\"tmp\\\"}]}}\\n\",\"kubernetes.io/config.seen\":\"2021-08-10T22:50:13.206951437Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-besteffort-pod9540d6e2_c97c_415d_b6be_20e310fbd70f.slice","io.kubernetes.cri-o.ContainerID":"5d6221697b2798582f8c5d189f1bf1b3372b25cba9c17d3ef7e345aea540944e","io.kubernetes.cri-o.Cont
ainerName":"k8s_POD_storage-provisioner_kube-system_9540d6e2-c97c-415d-b6be-20e310fbd70f_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-10T22:50:14.271147064Z","io.kubernetes.cri-o.HostName":"test-preload-20210810224820-30291","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/5d6221697b2798582f8c5d189f1bf1b3372b25cba9c17d3ef7e345aea540944e/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"storage-provisioner","io.kubernetes.cri-o.Labels":"{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"io.kubernetes.pod.uid\":\"9540d6e2-c97c-415d-b6be-20e310fbd70f\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"integration-test\":\"storage-provisioner\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_9540d6e2-c97c-415d-b6be-20e310fb
d70f/5d6221697b2798582f8c5d189f1bf1b3372b25cba9c17d3ef7e345aea540944e.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\",\"uid\":\"9540d6e2-c97c-415d-b6be-20e310fbd70f\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/1613c47d4f44491bb20b8f5e8999353e1f712d17c9bf3326ea3ac27df271e0d1/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_kube-system_9540d6e2-c97c-415d-b6be-20e310fbd70f_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/5d6221697b2798582f8c5d189f1bf1b3372b25cba9c17d3ef7e345aea540944e/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"5d6221697b2798582f8c5d189f1bf1b3372b25cba9c17d3ef7e345aea540944e","io.kubernetes.cri-o.SeccompProfilePath":"","io.
kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/5d6221697b2798582f8c5d189f1bf1b3372b25cba9c17d3ef7e345aea540944e/userdata/shm","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"9540d6e2-c97c-415d-b6be-20e310fbd70f","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","k
ubernetes.io/config.seen":"2021-08-10T22:50:13.206951437Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8129021bbf4ea8e5b5171ea9104ac69b5e520e6eb2031086365e913888c40529","pid":3470,"status":"running","bundle":"/run/containers/storage/overlay-containers/8129021bbf4ea8e5b5171ea9104ac69b5e520e6eb2031086365e913888c40529/userdata","rootfs":"/var/lib/containers/storage/overlay/c6cb3c65bb9514de7750d0eb1cf898a5e2574dbf429025678dc0a3607e73364c/merged","created":"2021-08-10T22:49:46.44915925Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-10T22:49:44.337549067Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"bb577061a17ad23cfbbf52e9419bf32a\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-podbb577061a17ad23cfbbf52e9419bf32a.slice","io.kubernetes.c
ri-o.ContainerID":"8129021bbf4ea8e5b5171ea9104ac69b5e520e6eb2031086365e913888c40529","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-test-preload-20210810224820-30291_kube-system_bb577061a17ad23cfbbf52e9419bf32a_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-10T22:49:46.232480746Z","io.kubernetes.cri-o.HostName":"test-preload-20210810224820-30291","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/8129021bbf4ea8e5b5171ea9104ac69b5e520e6eb2031086365e913888c40529/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-scheduler-test-preload-20210810224820-30291","io.kubernetes.cri-o.Labels":"{\"component\":\"kube-scheduler\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"bb577061a17ad23cfbbf52e9419bf32a\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-scheduler-test-preload-2021081
0224820-30291\",\"tier\":\"control-plane\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-test-preload-20210810224820-30291_bb577061a17ad23cfbbf52e9419bf32a/8129021bbf4ea8e5b5171ea9104ac69b5e520e6eb2031086365e913888c40529.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler-test-preload-20210810224820-30291\",\"uid\":\"bb577061a17ad23cfbbf52e9419bf32a\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c6cb3c65bb9514de7750d0eb1cf898a5e2574dbf429025678dc0a3607e73364c/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-test-preload-20210810224820-30291_kube-system_bb577061a17ad23cfbbf52e9419bf32a_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/8129021bbf4ea8e5b5171ea9104ac69b5e520e6eb2031
086365e913888c40529/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"8129021bbf4ea8e5b5171ea9104ac69b5e520e6eb2031086365e913888c40529","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/8129021bbf4ea8e5b5171ea9104ac69b5e520e6eb2031086365e913888c40529/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-test-preload-20210810224820-30291","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"bb577061a17ad23cfbbf52e9419bf32a","kubernetes.io/config.hash":"bb577061a17ad23cfbbf52e9419bf32a","kubernetes.io/config.seen":"2021-08-10T22:49:44.337549067Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8817e7bc2fbab931abbf90e13416d071181d83b00531455160b6a7cc899b2088","pid":4085,"status":"running","bundle":"/run/containers/storage/overlay-containers/8817e7bc2fbab931abbf90e
13416d071181d83b00531455160b6a7cc899b2088/userdata","rootfs":"/var/lib/containers/storage/overlay/6dfa600c32844d8ba6c260a8b8e9ee16cf5071f6dbf1f224da16454abebfc67f/merged","created":"2021-08-10T22:50:11.763063252Z","annotations":{"controller-revision-hash":"68bd87b66","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"api\",\"kubernetes.io/config.seen\":\"2021-08-10T22:50:11.309447742Z\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-besteffort-pod26937be5_e3e9_4284_a360_e84c2f175b4f.slice","io.kubernetes.cri-o.ContainerID":"8817e7bc2fbab931abbf90e13416d071181d83b00531455160b6a7cc899b2088","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-proxy-zvm9g_kube-system_26937be5-e3e9-4284-a360-e84c2f175b4f_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-10T22:50:11.662114914Z","io.kubernetes.cri-o.HostName":"test-preload-20210810224820-30291","io.kubernetes.cri-o.HostNetwork":"true","io.kubern
etes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/8817e7bc2fbab931abbf90e13416d071181d83b00531455160b6a7cc899b2088/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-proxy-zvm9g","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-proxy-zvm9g\",\"controller-revision-hash\":\"68bd87b66\",\"pod-template-generation\":\"1\",\"k8s-app\":\"kube-proxy\",\"io.kubernetes.pod.uid\":\"26937be5-e3e9-4284-a360-e84c2f175b4f\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-zvm9g_26937be5-e3e9-4284-a360-e84c2f175b4f/8817e7bc2fbab931abbf90e13416d071181d83b00531455160b6a7cc899b2088.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy-zvm9g\",\"uid\":\"26937be5-e3e9-4284-a360-e84c2f175b4f\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/6dfa600c32844d8ba6c260a8b8e9ee
16cf5071f6dbf1f224da16454abebfc67f/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy-zvm9g_kube-system_26937be5-e3e9-4284-a360-e84c2f175b4f_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/8817e7bc2fbab931abbf90e13416d071181d83b00531455160b6a7cc899b2088/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"8817e7bc2fbab931abbf90e13416d071181d83b00531455160b6a7cc899b2088","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/8817e7bc2fbab931abbf90e13416d071181d83b00531455160b6a7cc899b2088/userdata/shm","io.kubernetes.pod.name":"kube-proxy-zvm9g","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"26937be5-e3e9-4284-a360-e84c2f175b4f","k8s-app":"kube-proxy
","kubernetes.io/config.seen":"2021-08-10T22:50:11.309447742Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9029fc5bb880e165bc23810305ee69777ad51357946b544b0960039cfc7e8870","pid":3623,"status":"running","bundle":"/run/containers/storage/overlay-containers/9029fc5bb880e165bc23810305ee69777ad51357946b544b0960039cfc7e8870/userdata","rootfs":"/var/lib/containers/storage/overlay/9553f551248846252b4a5280235c236a3d419b8d9b2af11ea1e3f19c72b91b60/merged","created":"2021-08-10T22:49:47.343427526Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"589bcd22","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"589bcd22\",\"
io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"9029fc5bb880e165bc23810305ee69777ad51357946b544b0960039cfc7e8870","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-10T22:49:47.191846971Z","io.kubernetes.cri-o.Image":"5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-controller-manager:v1.17.0","io.kubernetes.cri-o.ImageRef":"5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-test-preload-20210810224820-30291\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"603b914543a305bf066dc8de01ce2232\"}","io.kuberne
tes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-test-preload-20210810224820-30291_603b914543a305bf066dc8de01ce2232/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9553f551248846252b4a5280235c236a3d419b8d9b2af11ea1e3f19c72b91b60/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-test-preload-20210810224820-30291_kube-system_603b914543a305bf066dc8de01ce2232_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/b231d093e88f22e3b7214d52cd57d587762a584f1cc852d9d3e226930f794bab/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"b231d093e88f22e3b7214d52cd57d587762a584f1cc852d9d3e226930f794bab","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-test-preload-20210810224820-30291_kube-system_603b914543a305bf066dc8de01ce2232_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.St
din":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/603b914543a305bf066dc8de01ce2232/containers/kube-controller-manager/892b298c\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/603b914543a305bf066dc8de01ce2232/etc-hosts\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugi
ns/volume/exec\",\"readonly\":false}]","io.kubernetes.pod.name":"kube-controller-manager-test-preload-20210810224820-30291","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"603b914543a305bf066dc8de01ce2232","kubernetes.io/config.hash":"603b914543a305bf066dc8de01ce2232","kubernetes.io/config.seen":"2021-08-10T22:49:44.337559972Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b1771928472232e4da441e292937469cc490a0b7a79c4ce484c7fc7cffbe8cf6","pid":3504,"status":"running","bundle":"/run/containers/storage/overlay-containers/b1771928472232e4da441e292937469cc490a0b7a79c4ce484c7fc7cffbe8cf6/userdata","rootfs":"/var/lib/containers/storage/overlay/f00b8f52632681e582ee891254d80224cc610d4804d2fb98bcce8c6c8468766d/merged","created":"2021-08-10T22:49:46.878271366Z","annotations":{"component":"etcd","io.
container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-10T22:49:44.33755471Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"23fac27a2c6b2f58cd70a4c9dc8d0da6\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-besteffort-pod23fac27a2c6b2f58cd70a4c9dc8d0da6.slice","io.kubernetes.cri-o.ContainerID":"b1771928472232e4da441e292937469cc490a0b7a79c4ce484c7fc7cffbe8cf6","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-test-preload-20210810224820-30291_kube-system_23fac27a2c6b2f58cd70a4c9dc8d0da6_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-10T22:49:46.228863135Z","io.kubernetes.cri-o.HostName":"test-preload-20210810224820-30291","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/b1771928472232e4da441e292937469cc490a0b7a79c4ce484c7fc7cffbe8cf6/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.g
cr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"etcd-test-preload-20210810224820-30291","io.kubernetes.cri-o.Labels":"{\"component\":\"etcd\",\"tier\":\"control-plane\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"23fac27a2c6b2f58cd70a4c9dc8d0da6\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"etcd-test-preload-20210810224820-30291\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-test-preload-20210810224820-30291_23fac27a2c6b2f58cd70a4c9dc8d0da6/b1771928472232e4da441e292937469cc490a0b7a79c4ce484c7fc7cffbe8cf6.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-test-preload-20210810224820-30291\",\"uid\":\"23fac27a2c6b2f58cd70a4c9dc8d0da6\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/f00b8f52632681e582ee891254d80224cc610d4804d2fb98bcce8c6c8468766d/merged","io.kubernetes.cri-o.Name":"k8s_etcd-test-preload-20210810224820-30291_kube-system_23fac27a2c6b2f58cd70a4c9dc8d0da6_0","io.kub
ernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/b1771928472232e4da441e292937469cc490a0b7a79c4ce484c7fc7cffbe8cf6/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"b1771928472232e4da441e292937469cc490a0b7a79c4ce484c7fc7cffbe8cf6","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/b1771928472232e4da441e292937469cc490a0b7a79c4ce484c7fc7cffbe8cf6/userdata/shm","io.kubernetes.pod.name":"etcd-test-preload-20210810224820-30291","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"23fac27a2c6b2f58cd70a4c9dc8d0da6","kubernetes.io/config.hash":"23fac27a2c6b2f58cd70a4c9dc8d0da6","kubernetes.io/config.seen":"2021-08-10T22:49:44.33755471Z","kubernetes.io/config.source":"
file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b231d093e88f22e3b7214d52cd57d587762a584f1cc852d9d3e226930f794bab","pid":3486,"status":"running","bundle":"/run/containers/storage/overlay-containers/b231d093e88f22e3b7214d52cd57d587762a584f1cc852d9d3e226930f794bab/userdata","rootfs":"/var/lib/containers/storage/overlay/9a3961ff020a7b2ed4c98f9846a8eb258f91fdab5ffd14decd5a0391502ac12a/merged","created":"2021-08-10T22:49:46.675311366Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-10T22:49:44.337559972Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"603b914543a305bf066dc8de01ce2232\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pod603b914543a305bf066dc8de01ce2232.slice","io.kubernetes.cri-o.ContainerID":"b231d093e88f22e3b7214d52cd57d587762a5
84f1cc852d9d3e226930f794bab","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-test-preload-20210810224820-30291_kube-system_603b914543a305bf066dc8de01ce2232_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-10T22:49:46.198440349Z","io.kubernetes.cri-o.HostName":"test-preload-20210810224820-30291","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/b231d093e88f22e3b7214d52cd57d587762a584f1cc852d9d3e226930f794bab/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-controller-manager-test-preload-20210810224820-30291","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-controller-manager-test-preload-20210810224820-30291\",\"io.kubernetes.container.name\":\"POD\",\"tier\":\"control-plane\",\"component\":\"kube-controller-manager\",\"io.kubernetes.pod.uid\":\"603b914543a3
05bf066dc8de01ce2232\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-test-preload-20210810224820-30291_603b914543a305bf066dc8de01ce2232/b231d093e88f22e3b7214d52cd57d587762a584f1cc852d9d3e226930f794bab.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-test-preload-20210810224820-30291\",\"uid\":\"603b914543a305bf066dc8de01ce2232\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9a3961ff020a7b2ed4c98f9846a8eb258f91fdab5ffd14decd5a0391502ac12a/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-test-preload-20210810224820-30291_kube-system_603b914543a305bf066dc8de01ce2232_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/b231d093e88f22e3b7214d52cd57d587762a58
4f1cc852d9d3e226930f794bab/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"b231d093e88f22e3b7214d52cd57d587762a584f1cc852d9d3e226930f794bab","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/b231d093e88f22e3b7214d52cd57d587762a584f1cc852d9d3e226930f794bab/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-test-preload-20210810224820-30291","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"603b914543a305bf066dc8de01ce2232","kubernetes.io/config.hash":"603b914543a305bf066dc8de01ce2232","kubernetes.io/config.seen":"2021-08-10T22:49:44.337559972Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b9f131c8976914b86f7ef77a7914c676182f1e4fa608de3ce864138068cb90c3","pid":8056,"status":"running","bundle":"/run/containers/storage/overlay-containers/b9f131c
8976914b86f7ef77a7914c676182f1e4fa608de3ce864138068cb90c3/userdata","rootfs":"/var/lib/containers/storage/overlay/caf71f5814b2788d13242c82fde194ec98a5390f2d3e83568472801c86ad98e3/merged","created":"2021-08-10T22:50:30.604915673Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"732f1ed2","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"2","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"732f1ed2\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"b9f131c8976914b86f7ef77a7914c676182f1e4fa608de3ce864138068cb90c3","io.kubernetes.cri-o.ContainerType":"container","io.kubern
etes.cri-o.Created":"2021-08-10T22:50:30.469540779Z","io.kubernetes.cri-o.Image":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"9540d6e2-c97c-415d-b6be-20e310fbd70f\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_9540d6e2-c97c-415d-b6be-20e310fbd70f/storage-provisioner/2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\",\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/caf71f5814b2788d13242c82fde194ec98a5390f2d3e83568472801c86ad98e3/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-sy
stem_9540d6e2-c97c-415d-b6be-20e310fbd70f_2","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/5d6221697b2798582f8c5d189f1bf1b3372b25cba9c17d3ef7e345aea540944e/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"5d6221697b2798582f8c5d189f1bf1b3372b25cba9c17d3ef7e345aea540944e","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_9540d6e2-c97c-415d-b6be-20e310fbd70f_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/9540d6e2-c97c-415d-b6be-20e310fbd70f/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/9540d6e2-c97c-415d-b6be-20e310fbd70f/containers/storage-provisioner/48dc3782\",\"readonly\":false},{\"container_path\"
:\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/9540d6e2-c97c-415d-b6be-20e310fbd70f/volumes/kubernetes.io~secret/storage-provisioner-token-nkj29\",\"readonly\":true}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"9540d6e2-c97c-415d-b6be-20e310fbd70f","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provi
sioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2021-08-10T22:50:13.206951437Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c53eed59215f6778e54e6b1db239641870d0f6cbb1005d78553d81db5c7c652d","pid":3558,"status":"running","bundle":"/run/containers/storage/overlay-containers/c53eed59215f6778e54e6b1db239641870d0f6cbb1005d78553d81db5c7c652d/userdata","rootfs":"/var/lib/containers/storage/overlay/6f3fa854c4325a775260162c8f249d109ecacd41acda3cee3da2ef5d22900e03/merged","created":"2021-08-10T22:49:47.124721607Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"99930feb","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.containe
r.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"99930feb\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"c53eed59215f6778e54e6b1db239641870d0f6cbb1005d78553d81db5c7c652d","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-10T22:49:46.92790069Z","io.kubernetes.cri-o.Image":"78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-scheduler:v1.17.0","io.kubernetes.cri-o.ImageRef":"78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-test-preload-20210810224820-30291\",\"io.kubernetes.pod.namespace\":\
"kube-system\",\"io.kubernetes.pod.uid\":\"bb577061a17ad23cfbbf52e9419bf32a\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-test-preload-20210810224820-30291_bb577061a17ad23cfbbf52e9419bf32a/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/6f3fa854c4325a775260162c8f249d109ecacd41acda3cee3da2ef5d22900e03/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-test-preload-20210810224820-30291_kube-system_bb577061a17ad23cfbbf52e9419bf32a_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/8129021bbf4ea8e5b5171ea9104ac69b5e520e6eb2031086365e913888c40529/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"8129021bbf4ea8e5b5171ea9104ac69b5e520e6eb2031086365e913888c40529","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-test-preload-20210810224820-30291_kube-system_bb577061a17ad23cfbbf52e9419bf32a_0","io.kubernetes.cri-o.SeccompPr
ofilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/bb577061a17ad23cfbbf52e9419bf32a/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/bb577061a17ad23cfbbf52e9419bf32a/containers/kube-scheduler/65a5f17a\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-scheduler-test-preload-20210810224820-30291","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"bb577061a17ad23cfbbf52e9419bf32a","kubernetes.io/config.hash":"bb577061a17ad23cfbbf52e9419bf32a","kubernetes.io/config.seen":"2021-08-10T22:49:44.337549067Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactiv
e-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"dc240072d40333ee1ec6927e58f7048d1efc45f2591096267eb40a14bb891744","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/dc240072d40333ee1ec6927e58f7048d1efc45f2591096267eb40a14bb891744/userdata","rootfs":"/var/lib/containers/storage/overlay/ffd1789601ef2e4248939de182715103b25e0151a6973f583e02cde362dc07bc/merged","created":"2021-08-10T22:50:15.864442448Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"732f1ed2","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"732f1ed2\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"
io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"dc240072d40333ee1ec6927e58f7048d1efc45f2591096267eb40a14bb891744","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-10T22:50:15.734800045Z","io.kubernetes.cri-o.Image":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"9540d6e2-c97c-415d-b6be-20e310fbd70f\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_9540d6e2-c97c-415d-b6be-20e310fbd70f/storage-provisioner/1.log","io.kubernetes.cri-o.Metadata":
"{\"name\":\"storage-provisioner\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/ffd1789601ef2e4248939de182715103b25e0151a6973f583e02cde362dc07bc/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-system_9540d6e2-c97c-415d-b6be-20e310fbd70f_1","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/5d6221697b2798582f8c5d189f1bf1b3372b25cba9c17d3ef7e345aea540944e/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"5d6221697b2798582f8c5d189f1bf1b3372b25cba9c17d3ef7e345aea540944e","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_9540d6e2-c97c-415d-b6be-20e310fbd70f_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/po
ds/9540d6e2-c97c-415d-b6be-20e310fbd70f/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/9540d6e2-c97c-415d-b6be-20e310fbd70f/containers/storage-provisioner/ebaeaadb\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/9540d6e2-c97c-415d-b6be-20e310fbd70f/volumes/kubernetes.io~secret/storage-provisioner-token-nkj29\",\"readonly\":true}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"9540d6e2-c97c-415d-b6be-20e310fbd70f","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"
/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2021-08-10T22:50:13.206951437Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"edde24c1a9c539830d810107febdd0ceaffbf9652c9cd6fa9b4faa07af471a26","pid":3696,"status":"running","bundle":"/run/containers/storage/overlay-containers/edde24c1a9c539830d810107febdd0ceaffbf9652c9cd6fa9b4faa07af471a26/userdata","rootfs":"/var/lib/containers/storage/overlay/56455c6d4362ee7afe1ba8a269e7c0d0ad183b4de99a003f58ce48ee170a8048/merged","created":"2021-08-10T22:49:48.431831211Z","
annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"f1d80633","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"f1d80633\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"edde24c1a9c539830d810107febdd0ceaffbf9652c9cd6fa9b4faa07af471a26","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-10T22:49:48.302871425Z","io.kubernetes.cri-o.Image":"303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/etcd:3.4.3-0","io.kubernetes.cri-o.ImageRef":"303ce5db
0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-test-preload-20210810224820-30291\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"23fac27a2c6b2f58cd70a4c9dc8d0da6\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-test-preload-20210810224820-30291_23fac27a2c6b2f58cd70a4c9dc8d0da6/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/56455c6d4362ee7afe1ba8a269e7c0d0ad183b4de99a003f58ce48ee170a8048/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-test-preload-20210810224820-30291_kube-system_23fac27a2c6b2f58cd70a4c9dc8d0da6_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/b1771928472232e4da441e292937469cc490a0b7a79c4ce484c7fc7cffbe8cf6/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"b1771928472232e4da441e292937469cc490a0b7a79c4ce4
84c7fc7cffbe8cf6","io.kubernetes.cri-o.SandboxName":"k8s_etcd-test-preload-20210810224820-30291_kube-system_23fac27a2c6b2f58cd70a4c9dc8d0da6_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/23fac27a2c6b2f58cd70a4c9dc8d0da6/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/23fac27a2c6b2f58cd70a4c9dc8d0da6/containers/etcd/0d144484\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false}]","io.kubernetes.pod.name":"etcd-test-preload-20210810224820-30291","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.k
ubernetes.pod.uid":"23fac27a2c6b2f58cd70a4c9dc8d0da6","kubernetes.io/config.hash":"23fac27a2c6b2f58cd70a4c9dc8d0da6","kubernetes.io/config.seen":"2021-08-10T22:49:44.33755471Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f379dea9d5ffcac7e24e23f2e0dab8b1fc4800f05db9951983bce94277e3379a","pid":3476,"status":"running","bundle":"/run/containers/storage/overlay-containers/f379dea9d5ffcac7e24e23f2e0dab8b1fc4800f05db9951983bce94277e3379a/userdata","rootfs":"/var/lib/containers/storage/overlay/629c750496deb8d6b9f5bfc59488cda821679771e271a92ce64dd02627506e9a/merged","created":"2021-08-10T22:49:46.484867917Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-10T22:49:44.337557499Z\",\"kubernetes.io/config.source\":\"file\",\"kube
rnetes.io/config.hash\":\"ef1fd623bd23c3b480f66cfc795fa0ab\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-podef1fd623bd23c3b480f66cfc795fa0ab.slice","io.kubernetes.cri-o.ContainerID":"f379dea9d5ffcac7e24e23f2e0dab8b1fc4800f05db9951983bce94277e3379a","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-test-preload-20210810224820-30291_kube-system_ef1fd623bd23c3b480f66cfc795fa0ab_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-10T22:49:46.210894743Z","io.kubernetes.cri-o.HostName":"test-preload-20210810224820-30291","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/f379dea9d5ffcac7e24e23f2e0dab8b1fc4800f05db9951983bce94277e3379a/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-apiserver-test-preload-20210810224820-30291","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"ef1
fd623bd23c3b480f66cfc795fa0ab\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-apiserver-test-preload-20210810224820-30291\",\"tier\":\"control-plane\",\"component\":\"kube-apiserver\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-test-preload-20210810224820-30291_ef1fd623bd23c3b480f66cfc795fa0ab/f379dea9d5ffcac7e24e23f2e0dab8b1fc4800f05db9951983bce94277e3379a.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-test-preload-20210810224820-30291\",\"uid\":\"ef1fd623bd23c3b480f66cfc795fa0ab\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/629c750496deb8d6b9f5bfc59488cda821679771e271a92ce64dd02627506e9a/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-test-preload-20210810224820-30291_kube-system_ef1fd623bd23c3b480f66cfc795fa0ab_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"
[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/f379dea9d5ffcac7e24e23f2e0dab8b1fc4800f05db9951983bce94277e3379a/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"f379dea9d5ffcac7e24e23f2e0dab8b1fc4800f05db9951983bce94277e3379a","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/f379dea9d5ffcac7e24e23f2e0dab8b1fc4800f05db9951983bce94277e3379a/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-test-preload-20210810224820-30291","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"ef1fd623bd23c3b480f66cfc795fa0ab","kubernetes.io/config.hash":"ef1fd623bd23c3b480f66cfc795fa0ab","kubernetes.io/config.seen":"2021-08-10T22:49:44.337557499Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"}]
	I0810 22:50:50.774057   32288 cri.go:113] list returned 15 containers
	I0810 22:50:50.774077   32288 cri.go:116] container: {ID:04efcb33acc3b6322ec09f6ab03cc72fb4eef3851be315661768b2c28b7580ca Status:running}
	I0810 22:50:50.774089   32288 cri.go:122] skipping {04efcb33acc3b6322ec09f6ab03cc72fb4eef3851be315661768b2c28b7580ca running}: state = "running", want "paused"
	I0810 22:50:50.774099   32288 cri.go:116] container: {ID:41003a20744ba44844bac954b01f50201088f18d7455625552f316ad5bf12024 Status:running}
	I0810 22:50:50.774104   32288 cri.go:122] skipping {41003a20744ba44844bac954b01f50201088f18d7455625552f316ad5bf12024 running}: state = "running", want "paused"
	I0810 22:50:50.774109   32288 cri.go:116] container: {ID:4e1d34a786ab389aa821b4f9c92713d5556d250d23142b50dfc40e1fbc8965c4 Status:running}
	I0810 22:50:50.774114   32288 cri.go:122] skipping {4e1d34a786ab389aa821b4f9c92713d5556d250d23142b50dfc40e1fbc8965c4 running}: state = "running", want "paused"
	I0810 22:50:50.774120   32288 cri.go:116] container: {ID:51ebf16de90b214fe88bd71ea32d688a4e2b2b7b9c164da070627206dd35a1da Status:running}
	I0810 22:50:50.774124   32288 cri.go:118] skipping 51ebf16de90b214fe88bd71ea32d688a4e2b2b7b9c164da070627206dd35a1da - not in ps
	I0810 22:50:50.774129   32288 cri.go:116] container: {ID:5d6221697b2798582f8c5d189f1bf1b3372b25cba9c17d3ef7e345aea540944e Status:running}
	I0810 22:50:50.774134   32288 cri.go:118] skipping 5d6221697b2798582f8c5d189f1bf1b3372b25cba9c17d3ef7e345aea540944e - not in ps
	I0810 22:50:50.774137   32288 cri.go:116] container: {ID:8129021bbf4ea8e5b5171ea9104ac69b5e520e6eb2031086365e913888c40529 Status:running}
	I0810 22:50:50.774143   32288 cri.go:118] skipping 8129021bbf4ea8e5b5171ea9104ac69b5e520e6eb2031086365e913888c40529 - not in ps
	I0810 22:50:50.774147   32288 cri.go:116] container: {ID:8817e7bc2fbab931abbf90e13416d071181d83b00531455160b6a7cc899b2088 Status:running}
	I0810 22:50:50.774151   32288 cri.go:118] skipping 8817e7bc2fbab931abbf90e13416d071181d83b00531455160b6a7cc899b2088 - not in ps
	I0810 22:50:50.774155   32288 cri.go:116] container: {ID:9029fc5bb880e165bc23810305ee69777ad51357946b544b0960039cfc7e8870 Status:running}
	I0810 22:50:50.774161   32288 cri.go:122] skipping {9029fc5bb880e165bc23810305ee69777ad51357946b544b0960039cfc7e8870 running}: state = "running", want "paused"
	I0810 22:50:50.774165   32288 cri.go:116] container: {ID:b1771928472232e4da441e292937469cc490a0b7a79c4ce484c7fc7cffbe8cf6 Status:running}
	I0810 22:50:50.774169   32288 cri.go:118] skipping b1771928472232e4da441e292937469cc490a0b7a79c4ce484c7fc7cffbe8cf6 - not in ps
	I0810 22:50:50.774177   32288 cri.go:116] container: {ID:b231d093e88f22e3b7214d52cd57d587762a584f1cc852d9d3e226930f794bab Status:running}
	I0810 22:50:50.774181   32288 cri.go:118] skipping b231d093e88f22e3b7214d52cd57d587762a584f1cc852d9d3e226930f794bab - not in ps
	I0810 22:50:50.774185   32288 cri.go:116] container: {ID:b9f131c8976914b86f7ef77a7914c676182f1e4fa608de3ce864138068cb90c3 Status:running}
	I0810 22:50:50.774189   32288 cri.go:122] skipping {b9f131c8976914b86f7ef77a7914c676182f1e4fa608de3ce864138068cb90c3 running}: state = "running", want "paused"
	I0810 22:50:50.774194   32288 cri.go:116] container: {ID:c53eed59215f6778e54e6b1db239641870d0f6cbb1005d78553d81db5c7c652d Status:running}
	I0810 22:50:50.774198   32288 cri.go:122] skipping {c53eed59215f6778e54e6b1db239641870d0f6cbb1005d78553d81db5c7c652d running}: state = "running", want "paused"
	I0810 22:50:50.774203   32288 cri.go:116] container: {ID:dc240072d40333ee1ec6927e58f7048d1efc45f2591096267eb40a14bb891744 Status:stopped}
	I0810 22:50:50.774207   32288 cri.go:122] skipping {dc240072d40333ee1ec6927e58f7048d1efc45f2591096267eb40a14bb891744 stopped}: state = "stopped", want "paused"
	I0810 22:50:50.774211   32288 cri.go:116] container: {ID:edde24c1a9c539830d810107febdd0ceaffbf9652c9cd6fa9b4faa07af471a26 Status:running}
	I0810 22:50:50.774215   32288 cri.go:122] skipping {edde24c1a9c539830d810107febdd0ceaffbf9652c9cd6fa9b4faa07af471a26 running}: state = "running", want "paused"
	I0810 22:50:50.774220   32288 cri.go:116] container: {ID:f379dea9d5ffcac7e24e23f2e0dab8b1fc4800f05db9951983bce94277e3379a Status:running}
	I0810 22:50:50.774225   32288 cri.go:118] skipping f379dea9d5ffcac7e24e23f2e0dab8b1fc4800f05db9951983bce94277e3379a - not in ps
	I0810 22:50:50.774266   32288 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0810 22:50:50.781487   32288 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0810 22:50:50.781509   32288 kubeadm.go:600] restartCluster start
	I0810 22:50:50.781559   32288 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0810 22:50:50.788409   32288 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0810 22:50:50.789133   32288 kubeconfig.go:93] found "test-preload-20210810224820-30291" server: "https://192.168.50.38:8443"
	I0810 22:50:50.789591   32288 kapi.go:59] client config for test-preload-20210810224820-30291: &rest.Config{Host:"https://192.168.50.38:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/test-preload-20210810224820-30291/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/test-preload-202108102
24820-30291/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2660), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0810 22:50:50.791079   32288 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0810 22:50:50.797775   32288 kubeadm.go:568] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -40,7 +40,7 @@
	     dataDir: /var/lib/minikube/etcd
	     extraArgs:
	       proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.17.0
	+kubernetesVersion: v1.17.3
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
	I0810 22:50:50.797795   32288 kubeadm.go:1032] stopping kube-system containers ...
	I0810 22:50:50.797811   32288 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0810 22:50:50.797855   32288 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0810 22:50:50.834368   32288 cri.go:76] found id: "b9f131c8976914b86f7ef77a7914c676182f1e4fa608de3ce864138068cb90c3"
	I0810 22:50:50.834388   32288 cri.go:76] found id: "dc240072d40333ee1ec6927e58f7048d1efc45f2591096267eb40a14bb891744"
	I0810 22:50:50.834392   32288 cri.go:76] found id: "41003a20744ba44844bac954b01f50201088f18d7455625552f316ad5bf12024"
	I0810 22:50:50.834397   32288 cri.go:76] found id: "4e1d34a786ab389aa821b4f9c92713d5556d250d23142b50dfc40e1fbc8965c4"
	I0810 22:50:50.834400   32288 cri.go:76] found id: "edde24c1a9c539830d810107febdd0ceaffbf9652c9cd6fa9b4faa07af471a26"
	I0810 22:50:50.834404   32288 cri.go:76] found id: "9029fc5bb880e165bc23810305ee69777ad51357946b544b0960039cfc7e8870"
	I0810 22:50:50.834408   32288 cri.go:76] found id: "04efcb33acc3b6322ec09f6ab03cc72fb4eef3851be315661768b2c28b7580ca"
	I0810 22:50:50.834412   32288 cri.go:76] found id: "c53eed59215f6778e54e6b1db239641870d0f6cbb1005d78553d81db5c7c652d"
	I0810 22:50:50.834415   32288 cri.go:76] found id: ""
	I0810 22:50:50.834422   32288 cri.go:221] Stopping containers: [b9f131c8976914b86f7ef77a7914c676182f1e4fa608de3ce864138068cb90c3 dc240072d40333ee1ec6927e58f7048d1efc45f2591096267eb40a14bb891744 41003a20744ba44844bac954b01f50201088f18d7455625552f316ad5bf12024 4e1d34a786ab389aa821b4f9c92713d5556d250d23142b50dfc40e1fbc8965c4 edde24c1a9c539830d810107febdd0ceaffbf9652c9cd6fa9b4faa07af471a26 9029fc5bb880e165bc23810305ee69777ad51357946b544b0960039cfc7e8870 04efcb33acc3b6322ec09f6ab03cc72fb4eef3851be315661768b2c28b7580ca c53eed59215f6778e54e6b1db239641870d0f6cbb1005d78553d81db5c7c652d]
	I0810 22:50:50.834461   32288 ssh_runner.go:149] Run: which crictl
	I0810 22:50:50.839303   32288 ssh_runner.go:149] Run: sudo /bin/crictl stop b9f131c8976914b86f7ef77a7914c676182f1e4fa608de3ce864138068cb90c3 dc240072d40333ee1ec6927e58f7048d1efc45f2591096267eb40a14bb891744 41003a20744ba44844bac954b01f50201088f18d7455625552f316ad5bf12024 4e1d34a786ab389aa821b4f9c92713d5556d250d23142b50dfc40e1fbc8965c4 edde24c1a9c539830d810107febdd0ceaffbf9652c9cd6fa9b4faa07af471a26 9029fc5bb880e165bc23810305ee69777ad51357946b544b0960039cfc7e8870 04efcb33acc3b6322ec09f6ab03cc72fb4eef3851be315661768b2c28b7580ca c53eed59215f6778e54e6b1db239641870d0f6cbb1005d78553d81db5c7c652d
	I0810 22:50:52.900828   32288 ssh_runner.go:189] Completed: sudo /bin/crictl stop b9f131c8976914b86f7ef77a7914c676182f1e4fa608de3ce864138068cb90c3 dc240072d40333ee1ec6927e58f7048d1efc45f2591096267eb40a14bb891744 41003a20744ba44844bac954b01f50201088f18d7455625552f316ad5bf12024 4e1d34a786ab389aa821b4f9c92713d5556d250d23142b50dfc40e1fbc8965c4 edde24c1a9c539830d810107febdd0ceaffbf9652c9cd6fa9b4faa07af471a26 9029fc5bb880e165bc23810305ee69777ad51357946b544b0960039cfc7e8870 04efcb33acc3b6322ec09f6ab03cc72fb4eef3851be315661768b2c28b7580ca c53eed59215f6778e54e6b1db239641870d0f6cbb1005d78553d81db5c7c652d: (2.06147916s)
	I0810 22:50:52.900917   32288 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0810 22:50:52.913186   32288 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0810 22:50:52.920401   32288 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5611 Aug 10 22:49 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5647 Aug 10 22:49 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2071 Aug 10 22:49 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5595 Aug 10 22:49 /etc/kubernetes/scheduler.conf
	
	I0810 22:50:52.920470   32288 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0810 22:50:52.926893   32288 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0810 22:50:52.932959   32288 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0810 22:50:52.938935   32288 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0810 22:50:52.945107   32288 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0810 22:50:52.951687   32288 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0810 22:50:52.951704   32288 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.3:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0810 22:50:53.023503   32288 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0810 22:50:53.876580   32288 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.3:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0810 22:50:54.158522   32288 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.3:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0810 22:50:54.269709   32288 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.3:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0810 22:50:54.405558   32288 api_server.go:50] waiting for apiserver process to appear ...
	I0810 22:50:54.405638   32288 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0810 22:50:54.918140   32288 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0810 22:50:55.418052   32288 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0810 22:50:55.917860   32288 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0810 22:50:56.418176   32288 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0810 22:50:56.440378   32288 api_server.go:70] duration metric: took 2.034823001s to wait for apiserver process to appear ...
	I0810 22:50:56.440404   32288 api_server.go:86] waiting for apiserver healthz status ...
	I0810 22:50:56.440417   32288 api_server.go:239] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0810 22:50:56.441163   32288 api_server.go:255] stopped: https://192.168.50.38:8443/healthz: Get "https://192.168.50.38:8443/healthz": dial tcp 192.168.50.38:8443: connect: connection refused
	I0810 22:50:56.942173   32288 api_server.go:239] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0810 22:51:01.054461   32288 api_server.go:265] https://192.168.50.38:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0810 22:51:01.054491   32288 api_server.go:101] status: https://192.168.50.38:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0810 22:51:01.442016   32288 api_server.go:239] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0810 22:51:01.448714   32288 api_server.go:265] https://192.168.50.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0810 22:51:01.448737   32288 api_server.go:101] status: https://192.168.50.38:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0810 22:51:01.941557   32288 api_server.go:239] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0810 22:51:01.950061   32288 api_server.go:265] https://192.168.50.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0810 22:51:01.950083   32288 api_server.go:101] status: https://192.168.50.38:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0810 22:51:02.442065   32288 api_server.go:239] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0810 22:51:02.454395   32288 api_server.go:265] https://192.168.50.38:8443/healthz returned 200:
	ok
	I0810 22:51:02.468361   32288 api_server.go:139] control plane version: v1.17.3
	I0810 22:51:02.468390   32288 api_server.go:129] duration metric: took 6.027978307s to wait for apiserver health ...
	I0810 22:51:02.468405   32288 cni.go:93] Creating CNI manager for ""
	I0810 22:51:02.468414   32288 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0810 22:51:02.470604   32288 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0810 22:51:02.470684   32288 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0810 22:51:02.479826   32288 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0810 22:51:02.504345   32288 system_pods.go:43] waiting for kube-system pods to appear ...
	I0810 22:51:02.520880   32288 system_pods.go:59] 7 kube-system pods found
	I0810 22:51:02.520908   32288 system_pods.go:61] "coredns-6955765f44-9gbh9" [8abdfb71-f1fb-4e43-a76f-55549541d3f5] Running
	I0810 22:51:02.520920   32288 system_pods.go:61] "etcd-test-preload-20210810224820-30291" [e3b8cc33-8dbf-4dae-b2c5-b439ff424800] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0810 22:51:02.520929   32288 system_pods.go:61] "kube-apiserver-test-preload-20210810224820-30291" [6de16671-b0ca-4995-9c11-df083708513c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0810 22:51:02.520939   32288 system_pods.go:61] "kube-controller-manager-test-preload-20210810224820-30291" [9b8c0023-5ff8-42f9-95c6-c371dd70be5a] Pending
	I0810 22:51:02.520946   32288 system_pods.go:61] "kube-proxy-zvm9g" [26937be5-e3e9-4284-a360-e84c2f175b4f] Running
	I0810 22:51:02.520952   32288 system_pods.go:61] "kube-scheduler-test-preload-20210810224820-30291" [ad95c3c8-cfb6-4e03-8889-719129d9ee5d] Pending
	I0810 22:51:02.520958   32288 system_pods.go:61] "storage-provisioner" [9540d6e2-c97c-415d-b6be-20e310fbd70f] Running
	I0810 22:51:02.520966   32288 system_pods.go:74] duration metric: took 16.597602ms to wait for pod list to return data ...
	I0810 22:51:02.520978   32288 node_conditions.go:102] verifying NodePressure condition ...
	I0810 22:51:02.527478   32288 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0810 22:51:02.527508   32288 node_conditions.go:123] node cpu capacity is 2
	I0810 22:51:02.527524   32288 node_conditions.go:105] duration metric: took 6.538917ms to run NodePressure ...
	I0810 22:51:02.527545   32288 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0810 22:51:03.129621   32288 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0810 22:51:03.132931   32288 kubeadm.go:746] kubelet initialised
	I0810 22:51:03.132950   32288 kubeadm.go:747] duration metric: took 3.303132ms waiting for restarted kubelet to initialise ...
	I0810 22:51:03.132960   32288 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0810 22:51:03.138587   32288 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6955765f44-9gbh9" in "kube-system" namespace to be "Ready" ...
	I0810 22:51:03.148334   32288 pod_ready.go:92] pod "coredns-6955765f44-9gbh9" in "kube-system" namespace has status "Ready":"True"
	I0810 22:51:03.148351   32288 pod_ready.go:81] duration metric: took 9.743977ms waiting for pod "coredns-6955765f44-9gbh9" in "kube-system" namespace to be "Ready" ...
	I0810 22:51:03.148359   32288 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-20210810224820-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:51:04.672934   32288 pod_ready.go:92] pod "etcd-test-preload-20210810224820-30291" in "kube-system" namespace has status "Ready":"True"
	I0810 22:51:04.672972   32288 pod_ready.go:81] duration metric: took 1.524605621s waiting for pod "etcd-test-preload-20210810224820-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:51:04.672987   32288 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-20210810224820-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:51:04.679074   32288 pod_ready.go:92] pod "kube-apiserver-test-preload-20210810224820-30291" in "kube-system" namespace has status "Ready":"True"
	I0810 22:51:04.679090   32288 pod_ready.go:81] duration metric: took 6.093511ms waiting for pod "kube-apiserver-test-preload-20210810224820-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:51:04.679102   32288 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-20210810224820-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:51:04.685543   32288 pod_ready.go:92] pod "kube-controller-manager-test-preload-20210810224820-30291" in "kube-system" namespace has status "Ready":"True"
	I0810 22:51:04.685557   32288 pod_ready.go:81] duration metric: took 6.447451ms waiting for pod "kube-controller-manager-test-preload-20210810224820-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:51:04.685568   32288 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zvm9g" in "kube-system" namespace to be "Ready" ...
	I0810 22:51:04.734803   32288 pod_ready.go:92] pod "kube-proxy-zvm9g" in "kube-system" namespace has status "Ready":"True"
	I0810 22:51:04.734824   32288 pod_ready.go:81] duration metric: took 49.248694ms waiting for pod "kube-proxy-zvm9g" in "kube-system" namespace to be "Ready" ...
	I0810 22:51:04.734835   32288 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-20210810224820-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:51:05.134361   32288 pod_ready.go:92] pod "kube-scheduler-test-preload-20210810224820-30291" in "kube-system" namespace has status "Ready":"True"
	I0810 22:51:05.134390   32288 pod_ready.go:81] duration metric: took 399.545467ms waiting for pod "kube-scheduler-test-preload-20210810224820-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:51:05.134405   32288 pod_ready.go:38] duration metric: took 2.001431718s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0810 22:51:05.134429   32288 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0810 22:51:05.147469   32288 ops.go:34] apiserver oom_adj: -16
	I0810 22:51:05.147495   32288 kubeadm.go:604] restartCluster took 14.36597454s
	I0810 22:51:05.147501   32288 kubeadm.go:392] StartCluster complete in 14.451761666s
	I0810 22:51:05.147518   32288 settings.go:142] acquiring lock: {Name:mk9de8b97604ec8ec02e9734983b03b6308517c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:51:05.147629   32288 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0810 22:51:05.148391   32288 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig: {Name:mkb7fc7bcea695301999150daa705ac3e8a4c8a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:51:05.149046   32288 kapi.go:59] client config for test-preload-20210810224820-30291: &rest.Config{Host:"https://192.168.50.38:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/test-preload-20210810224820-30291/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/test-preload-20210810224820-30291/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2660), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0810 22:51:05.676183   32288 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "test-preload-20210810224820-30291" rescaled to 1
	I0810 22:51:05.676266   32288 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0810 22:51:05.676305   32288 addons.go:342] enableAddons start: toEnable=map[default-storageclass:true storage-provisioner:true], additional=[]
	I0810 22:51:05.676272   32288 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.50.38 Port:8443 KubernetesVersion:v1.17.3 ControlPlane:true Worker:true}
	I0810 22:51:05.676405   32288 addons.go:59] Setting storage-provisioner=true in profile "test-preload-20210810224820-30291"
	I0810 22:51:05.678296   32288 out.go:177] * Verifying Kubernetes components...
	I0810 22:51:05.678354   32288 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0810 22:51:05.676436   32288 addons.go:135] Setting addon storage-provisioner=true in "test-preload-20210810224820-30291"
	I0810 22:51:05.676406   32288 addons.go:59] Setting default-storageclass=true in profile "test-preload-20210810224820-30291"
	I0810 22:51:05.678387   32288 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-20210810224820-30291"
	W0810 22:51:05.678411   32288 addons.go:147] addon storage-provisioner should already be in state true
	I0810 22:51:05.678470   32288 host.go:66] Checking if "test-preload-20210810224820-30291" exists ...
	I0810 22:51:05.678803   32288 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:51:05.678850   32288 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:51:05.678883   32288 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:51:05.678930   32288 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:51:05.690526   32288 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38153
	I0810 22:51:05.690656   32288 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:32793
	I0810 22:51:05.691026   32288 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:51:05.691057   32288 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:51:05.691556   32288 main.go:130] libmachine: Using API Version  1
	I0810 22:51:05.691590   32288 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:51:05.691631   32288 main.go:130] libmachine: Using API Version  1
	I0810 22:51:05.691648   32288 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:51:05.692006   32288 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:51:05.692033   32288 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:51:05.692184   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetState
	I0810 22:51:05.692517   32288 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:51:05.692551   32288 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:51:05.696064   32288 kapi.go:59] client config for test-preload-20210810224820-30291: &rest.Config{Host:"https://192.168.50.38:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/test-preload-20210810224820-30291/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/test-preload-20210810224820-30291/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2660), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0810 22:51:05.703280   32288 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43995
	I0810 22:51:05.703714   32288 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:51:05.704209   32288 main.go:130] libmachine: Using API Version  1
	I0810 22:51:05.704236   32288 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:51:05.704560   32288 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:51:05.704762   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetState
	I0810 22:51:05.708059   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .DriverName
	I0810 22:51:05.710184   32288 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0810 22:51:05.710310   32288 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0810 22:51:05.710323   32288 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0810 22:51:05.710339   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetSSHHostname
	I0810 22:51:05.712625   32288 addons.go:135] Setting addon default-storageclass=true in "test-preload-20210810224820-30291"
	W0810 22:51:05.712649   32288 addons.go:147] addon default-storageclass should already be in state true
	I0810 22:51:05.712686   32288 host.go:66] Checking if "test-preload-20210810224820-30291" exists ...
	I0810 22:51:05.713104   32288 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:51:05.713153   32288 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:51:05.715922   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) DBG | domain test-preload-20210810224820-30291 has defined MAC address 52:54:00:70:8b:73 in network mk-test-preload-20210810224820-30291
	I0810 22:51:05.716361   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:8b:73", ip: ""} in network mk-test-preload-20210810224820-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:48:38 +0000 UTC Type:0 Mac:52:54:00:70:8b:73 Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:test-preload-20210810224820-30291 Clientid:01:52:54:00:70:8b:73}
	I0810 22:51:05.716393   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) DBG | domain test-preload-20210810224820-30291 has defined IP address 192.168.50.38 and MAC address 52:54:00:70:8b:73 in network mk-test-preload-20210810224820-30291
	I0810 22:51:05.716550   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetSSHPort
	I0810 22:51:05.716731   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetSSHKeyPath
	I0810 22:51:05.716903   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetSSHUsername
	I0810 22:51:05.717063   32288 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/test-preload-20210810224820-30291/id_rsa Username:docker}
	I0810 22:51:05.724467   32288 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35501
	I0810 22:51:05.724853   32288 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:51:05.725338   32288 main.go:130] libmachine: Using API Version  1
	I0810 22:51:05.725358   32288 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:51:05.725698   32288 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:51:05.726180   32288 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:51:05.726221   32288 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:51:05.736508   32288 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41969
	I0810 22:51:05.736858   32288 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:51:05.737277   32288 main.go:130] libmachine: Using API Version  1
	I0810 22:51:05.737295   32288 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:51:05.737610   32288 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:51:05.737842   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetState
	I0810 22:51:05.740835   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .DriverName
	I0810 22:51:05.741065   32288 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0810 22:51:05.741079   32288 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0810 22:51:05.741094   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetSSHHostname
	I0810 22:51:05.745981   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) DBG | domain test-preload-20210810224820-30291 has defined MAC address 52:54:00:70:8b:73 in network mk-test-preload-20210810224820-30291
	I0810 22:51:05.746382   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:8b:73", ip: ""} in network mk-test-preload-20210810224820-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:48:38 +0000 UTC Type:0 Mac:52:54:00:70:8b:73 Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:test-preload-20210810224820-30291 Clientid:01:52:54:00:70:8b:73}
	I0810 22:51:05.746408   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) DBG | domain test-preload-20210810224820-30291 has defined IP address 192.168.50.38 and MAC address 52:54:00:70:8b:73 in network mk-test-preload-20210810224820-30291
	I0810 22:51:05.746559   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetSSHPort
	I0810 22:51:05.746703   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetSSHKeyPath
	I0810 22:51:05.746839   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .GetSSHUsername
	I0810 22:51:05.746939   32288 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/test-preload-20210810224820-30291/id_rsa Username:docker}
	I0810 22:51:05.810841   32288 start.go:716] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0810 22:51:05.810873   32288 node_ready.go:35] waiting up to 6m0s for node "test-preload-20210810224820-30291" to be "Ready" ...
	I0810 22:51:05.814333   32288 node_ready.go:49] node "test-preload-20210810224820-30291" has status "Ready":"True"
	I0810 22:51:05.814348   32288 node_ready.go:38] duration metric: took 3.452787ms waiting for node "test-preload-20210810224820-30291" to be "Ready" ...
	I0810 22:51:05.814356   32288 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0810 22:51:05.820705   32288 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6955765f44-9gbh9" in "kube-system" namespace to be "Ready" ...
	I0810 22:51:05.843826   32288 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.17.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0810 22:51:05.849650   32288 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.17.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0810 22:51:05.934483   32288 pod_ready.go:92] pod "coredns-6955765f44-9gbh9" in "kube-system" namespace has status "Ready":"True"
	I0810 22:51:05.934514   32288 pod_ready.go:81] duration metric: took 113.787473ms waiting for pod "coredns-6955765f44-9gbh9" in "kube-system" namespace to be "Ready" ...
	I0810 22:51:05.934529   32288 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-20210810224820-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:51:06.192677   32288 main.go:130] libmachine: Making call to close driver server
	I0810 22:51:06.192712   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .Close
	I0810 22:51:06.193001   32288 main.go:130] libmachine: Successfully made call to close driver server
	I0810 22:51:06.193024   32288 main.go:130] libmachine: Making call to close connection to plugin binary
	I0810 22:51:06.193039   32288 main.go:130] libmachine: Making call to close driver server
	I0810 22:51:06.193046   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) DBG | Closing plugin on server side
	I0810 22:51:06.193049   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .Close
	I0810 22:51:06.193303   32288 main.go:130] libmachine: Successfully made call to close driver server
	I0810 22:51:06.193339   32288 main.go:130] libmachine: Making call to close connection to plugin binary
	I0810 22:51:06.193390   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) DBG | Closing plugin on server side
	I0810 22:51:06.194698   32288 main.go:130] libmachine: Making call to close driver server
	I0810 22:51:06.194715   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .Close
	I0810 22:51:06.194916   32288 main.go:130] libmachine: Successfully made call to close driver server
	I0810 22:51:06.194923   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) DBG | Closing plugin on server side
	I0810 22:51:06.194932   32288 main.go:130] libmachine: Making call to close connection to plugin binary
	I0810 22:51:06.194942   32288 main.go:130] libmachine: Making call to close driver server
	I0810 22:51:06.194952   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .Close
	I0810 22:51:06.195175   32288 main.go:130] libmachine: Successfully made call to close driver server
	I0810 22:51:06.195196   32288 main.go:130] libmachine: Making call to close connection to plugin binary
	I0810 22:51:06.195199   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) DBG | Closing plugin on server side
	I0810 22:51:06.195211   32288 main.go:130] libmachine: Making call to close driver server
	I0810 22:51:06.195230   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) Calling .Close
	I0810 22:51:06.195434   32288 main.go:130] libmachine: Successfully made call to close driver server
	I0810 22:51:06.195447   32288 main.go:130] libmachine: Making call to close connection to plugin binary
	I0810 22:51:06.195455   32288 main.go:130] libmachine: (test-preload-20210810224820-30291) DBG | Closing plugin on server side
	I0810 22:51:06.197540   32288 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0810 22:51:06.197560   32288 addons.go:344] enableAddons completed in 521.262308ms
	I0810 22:51:06.334611   32288 pod_ready.go:92] pod "etcd-test-preload-20210810224820-30291" in "kube-system" namespace has status "Ready":"True"
	I0810 22:51:06.334634   32288 pod_ready.go:81] duration metric: took 400.096232ms waiting for pod "etcd-test-preload-20210810224820-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:51:06.334646   32288 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-20210810224820-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:51:06.733849   32288 pod_ready.go:92] pod "kube-apiserver-test-preload-20210810224820-30291" in "kube-system" namespace has status "Ready":"True"
	I0810 22:51:06.733874   32288 pod_ready.go:81] duration metric: took 399.220931ms waiting for pod "kube-apiserver-test-preload-20210810224820-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:51:06.733886   32288 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-20210810224820-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:51:07.134809   32288 pod_ready.go:92] pod "kube-controller-manager-test-preload-20210810224820-30291" in "kube-system" namespace has status "Ready":"True"
	I0810 22:51:07.134833   32288 pod_ready.go:81] duration metric: took 400.940457ms waiting for pod "kube-controller-manager-test-preload-20210810224820-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:51:07.134844   32288 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zvm9g" in "kube-system" namespace to be "Ready" ...
	I0810 22:51:07.533763   32288 pod_ready.go:92] pod "kube-proxy-zvm9g" in "kube-system" namespace has status "Ready":"True"
	I0810 22:51:07.533786   32288 pod_ready.go:81] duration metric: took 398.935547ms waiting for pod "kube-proxy-zvm9g" in "kube-system" namespace to be "Ready" ...
	I0810 22:51:07.533796   32288 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-20210810224820-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:51:07.934133   32288 pod_ready.go:92] pod "kube-scheduler-test-preload-20210810224820-30291" in "kube-system" namespace has status "Ready":"True"
	I0810 22:51:07.934158   32288 pod_ready.go:81] duration metric: took 400.354616ms waiting for pod "kube-scheduler-test-preload-20210810224820-30291" in "kube-system" namespace to be "Ready" ...
	I0810 22:51:07.934171   32288 pod_ready.go:38] duration metric: took 2.119805672s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0810 22:51:07.934188   32288 api_server.go:50] waiting for apiserver process to appear ...
	I0810 22:51:07.934233   32288 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0810 22:51:07.944838   32288 api_server.go:70] duration metric: took 2.268416174s to wait for apiserver process to appear ...
	I0810 22:51:07.944864   32288 api_server.go:86] waiting for apiserver healthz status ...
	I0810 22:51:07.944875   32288 api_server.go:239] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0810 22:51:07.951156   32288 api_server.go:265] https://192.168.50.38:8443/healthz returned 200:
	ok
	I0810 22:51:07.952106   32288 api_server.go:139] control plane version: v1.17.3
	I0810 22:51:07.952143   32288 api_server.go:129] duration metric: took 7.269147ms to wait for apiserver health ...
	I0810 22:51:07.952161   32288 system_pods.go:43] waiting for kube-system pods to appear ...
	I0810 22:51:08.135853   32288 system_pods.go:59] 7 kube-system pods found
	I0810 22:51:08.135889   32288 system_pods.go:61] "coredns-6955765f44-9gbh9" [8abdfb71-f1fb-4e43-a76f-55549541d3f5] Running
	I0810 22:51:08.135894   32288 system_pods.go:61] "etcd-test-preload-20210810224820-30291" [e3b8cc33-8dbf-4dae-b2c5-b439ff424800] Running
	I0810 22:51:08.135899   32288 system_pods.go:61] "kube-apiserver-test-preload-20210810224820-30291" [6de16671-b0ca-4995-9c11-df083708513c] Running
	I0810 22:51:08.135903   32288 system_pods.go:61] "kube-controller-manager-test-preload-20210810224820-30291" [9b8c0023-5ff8-42f9-95c6-c371dd70be5a] Running
	I0810 22:51:08.135906   32288 system_pods.go:61] "kube-proxy-zvm9g" [26937be5-e3e9-4284-a360-e84c2f175b4f] Running
	I0810 22:51:08.135910   32288 system_pods.go:61] "kube-scheduler-test-preload-20210810224820-30291" [ad95c3c8-cfb6-4e03-8889-719129d9ee5d] Running
	I0810 22:51:08.135914   32288 system_pods.go:61] "storage-provisioner" [9540d6e2-c97c-415d-b6be-20e310fbd70f] Running
	I0810 22:51:08.135920   32288 system_pods.go:74] duration metric: took 183.753153ms to wait for pod list to return data ...
	I0810 22:51:08.135928   32288 default_sa.go:34] waiting for default service account to be created ...
	I0810 22:51:08.334896   32288 default_sa.go:45] found service account: "default"
	I0810 22:51:08.334922   32288 default_sa.go:55] duration metric: took 198.984832ms for default service account to be created ...
	I0810 22:51:08.334931   32288 system_pods.go:116] waiting for k8s-apps to be running ...
	I0810 22:51:08.536965   32288 system_pods.go:86] 7 kube-system pods found
	I0810 22:51:08.536999   32288 system_pods.go:89] "coredns-6955765f44-9gbh9" [8abdfb71-f1fb-4e43-a76f-55549541d3f5] Running
	I0810 22:51:08.537006   32288 system_pods.go:89] "etcd-test-preload-20210810224820-30291" [e3b8cc33-8dbf-4dae-b2c5-b439ff424800] Running
	I0810 22:51:08.537016   32288 system_pods.go:89] "kube-apiserver-test-preload-20210810224820-30291" [6de16671-b0ca-4995-9c11-df083708513c] Running
	I0810 22:51:08.537023   32288 system_pods.go:89] "kube-controller-manager-test-preload-20210810224820-30291" [9b8c0023-5ff8-42f9-95c6-c371dd70be5a] Running
	I0810 22:51:08.537028   32288 system_pods.go:89] "kube-proxy-zvm9g" [26937be5-e3e9-4284-a360-e84c2f175b4f] Running
	I0810 22:51:08.537034   32288 system_pods.go:89] "kube-scheduler-test-preload-20210810224820-30291" [ad95c3c8-cfb6-4e03-8889-719129d9ee5d] Running
	I0810 22:51:08.537039   32288 system_pods.go:89] "storage-provisioner" [9540d6e2-c97c-415d-b6be-20e310fbd70f] Running
	I0810 22:51:08.537080   32288 system_pods.go:126] duration metric: took 202.140417ms to wait for k8s-apps to be running ...
	I0810 22:51:08.537090   32288 system_svc.go:44] waiting for kubelet service to be running ....
	I0810 22:51:08.537136   32288 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0810 22:51:08.550724   32288 system_svc.go:56] duration metric: took 13.626535ms WaitForService to wait for kubelet.
	I0810 22:51:08.550749   32288 kubeadm.go:547] duration metric: took 2.874342382s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0810 22:51:08.550774   32288 node_conditions.go:102] verifying NodePressure condition ...
	I0810 22:51:08.734606   32288 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0810 22:51:08.734633   32288 node_conditions.go:123] node cpu capacity is 2
	I0810 22:51:08.734645   32288 node_conditions.go:105] duration metric: took 183.866109ms to run NodePressure ...
	I0810 22:51:08.734655   32288 start.go:231] waiting for startup goroutines ...
	I0810 22:51:08.777929   32288 start.go:462] kubectl: 1.20.5, cluster: 1.17.3 (minor skew: 3)
	I0810 22:51:08.779947   32288 out.go:177] 
	W0810 22:51:08.780087   32288 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.17.3.
	I0810 22:51:08.781616   32288 out.go:177]   - Want kubectl v1.17.3? Try 'minikube kubectl -- get pods -A'
	I0810 22:51:08.783139   32288 out.go:177] * Done! kubectl is now configured to use "test-preload-20210810224820-30291" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Tue 2021-08-10 22:48:34 UTC, end at Tue 2021-08-10 22:51:09 UTC. --
	Aug 10 22:51:09 test-preload-20210810224820-30291 crio[7740]: time="2021-08-10 22:51:09.620311269Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=9472a980-f3d6-45bf-9d81-999033d6397e name=/runtime.v1alpha2.RuntimeService/Status
	Aug 10 22:51:09 test-preload-20210810224820-30291 crio[7740]: time="2021-08-10 22:51:09.620446648Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=9472a980-f3d6-45bf-9d81-999033d6397e name=/runtime.v1alpha2.RuntimeService/Status
	Aug 10 22:51:09 test-preload-20210810224820-30291 crio[7740]: time="2021-08-10 22:51:09.660891632Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=26189435-f943-4cbf-9209-059af3ac8e72 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:51:09 test-preload-20210810224820-30291 crio[7740]: time="2021-08-10 22:51:09.661034326Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=26189435-f943-4cbf-9209-059af3ac8e72 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:51:09 test-preload-20210810224820-30291 crio[7740]: time="2021-08-10 22:51:09.661317773Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:71fb588b5749d72c0be2a19a31e081cdd39c03185a093084126222ea78b43f8e,PodSandboxId:8817e7bc2fbab931abbf90e13416d071181d83b00531455160b6a7cc899b2088,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:e5cfb363c7caf31fd5a2a3dbd37b6bfc96099ce20209de8e6f55e00ae7ff56c7,State:CONTAINER_RUNNING,CreatedAt:1628635863755333775,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zvm9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26937be5-e3e9-4284-a360-e84c2f175b4f,},Annotations:map[string]string{io.kubernetes.container.hash: ecf76616,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5460150a2fee8086d52a326ac54ea685b608f48471759d6268917322e4aa38d3,PodSandboxId:5d6221697b2798582f8c5d189f1bf1b3372b25cba9c17d3ef7e345aea540944e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1628635863329161820,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9540d6e2-c97c-415d-b6be-20e310fbd70f,},Annotations:map[string]string{io.kubernetes.container.hash: 732f1ed2,io.kubernetes.container.restartCount: 3,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d9178227b406d73f10966c0ec1378aba7c4b728f22785fb68f48edaeb13e31d,PodSandboxId:51ebf16de90b214fe88bd71ea32d688a4e2b2b7b9c164da070627206dd35a1da,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:1951ff35852d301efa66f57928db5d5caa654068e5c6ea4d7b1c039dad1000d2,State:CONTAINER_RUNNING,CreatedAt:1628635863274171975,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6955765f44-9gbh9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8abdfb71-f1fb-4e43-a76f-55549541d3f5,},Annotations:map[string]string{io.kubernetes.container.hash: f9943294,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c99a52e5c8b3eb6ccf886219ddd66b283ac9b42d473602412b6db9e7ddb40ce,PodSandboxId:582d428e2ea5444151da095990afee1505e245275e50a320d3268353cf8c5dd2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d109c0821a2b9225b69b99a95000df5cd1de5d606bc187b3620d730d7769c6ad,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:f1b432e0ed98f3e2792cf81d6eaa62dc4bf37dd2efd9000b21d206e891ab1fb4,State:CONTAINER_RUNNING,CreatedAt:1628635856304242792,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-20210810224820-30291,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 29b5a3494fd7c53351d2b61e9b662a3a,},Annotations:map[string]string{io.kubernetes.container.hash: e3642d59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e805744318b0952975f4eabe512214832e121a2caf56cdfb9a0fff7697b3131,PodSandboxId:b1a3bfcf9c6347c38047f5a72241e608e61d18a072f4947a076ef66eb915c13a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b0f1517c1f4bb153597033d2efd81a9ac630e6a569307f993b2c0368afcf0302,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:4be02aa0403799a8de33152a8e495114960f0ace9c1dacebc52df978f58f3555,State:CONTAINER_RUNNING,CreatedAt:1628635856141920729,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-20210810224820-30291,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: c7178d8492f798ee160e507a1f6158eb,},Annotations:map[string]string{io.kubernetes.container.hash: 313be8e8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:225594d64682e26b96c0cce3458f20b8fada934b3d0a3c56e60f9bfd52f3dd75,PodSandboxId:32fb6a73a8e5a533e5fc8933f6e4b875deeca190a7f917a4e754cf94d9b8bb4a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90d27391b7808cde8d9a81cfa43b1e81de5c4912b4b52a7dccb19eb4fe3c236b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:692cc58967c316106ce83ca6a5c6617c2226126d44931e9372450364ad64c97b,State:CONTAINER_RUNNING,CreatedAt:1628635856113699504,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-20210810224820-30291,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 208e7765a66a346c75b6f7586d2fa387,},Annotations:map[string]string{io.kubernetes.container.hash: bab94b4f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19fe0b5a36f466184cf564cc711e40d996218f45ea352bad3c467ec0324f0a18,PodSandboxId:b1771928472232e4da441e292937469cc490a0b7a79c4ce484c7fc7cffbe8cf6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:cfbff4b3797c497fcb2b550d8ce345151f79f7e650212cd63bf2dc3cd5428274,State:CONTAINER_RUNNING,CreatedAt:1628635855826048341,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-20210810224820-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 23fac27a2c6b2f58cd70a4c9dc8d0da6,},Annotations:map[string]string{io.kubernetes.container.hash: f1d80633,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9f131c8976914b86f7ef77a7914c676182f1e4fa608de3ce864138068cb90c3,PodSandboxId:5d6221697b2798582f8c5d189f1bf1b3372b25cba9c17d3ef7e345aea540944e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1628635830604915673,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9540d6e2-c97c-415d-b6be-20e310fbd70f,},Annotations:map[string]string{io.kubernetes.container.hash: 732f1ed2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41003a20744ba44844bac954b01f50201088f18d7455625552f316ad5bf12024,PodSandboxId:51ebf16de90b214fe88bd71ea32d688a4e2b2b7b9c164da070627206dd35a1da,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61,Annotations:map[string]string{},},ImageRef:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61,State:CONTAINER_EXITED,CreatedAt:1628635813574363514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6955765f44-9gbh9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8abdfb71-f1fb-4e43-a76f-55549541d3f5,},Annotations:map[string]strin
g{io.kubernetes.container.hash: f9943294,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e1d34a786ab389aa821b4f9c92713d5556d250d23142b50dfc40e1fbc8965c4,PodSandboxId:8817e7bc2fbab931abbf90e13416d071181d83b00531455160b6a7cc899b2088,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19,Annotations:map[string]string{},},ImageRef:7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19,State:CONTAINER_EXITED,CreatedAt:1628635812356596398,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-zvm9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26937be5-e3e9-4284-a360-e84c2f175b4f,},Annotations:map[string]string{io.kubernetes.container.hash: ecf76616,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edde24c1a9c539830d810107febdd0ceaffbf9652c9cd6fa9b4faa07af471a26,PodSandboxId:b1771928472232e4da441e292937469cc490a0b7a79c4ce484c7fc7cffbe8cf6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,State:CONTAINER_EXITED,CreatedAt:1628635788431831211,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-20210810224820-30291,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 23fac27a2c6b2f58cd70a4c9dc8d0da6,},Annotations:map[string]string{io.kubernetes.container.hash: f1d80633,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9029fc5bb880e165bc23810305ee69777ad51357946b544b0960039cfc7e8870,PodSandboxId:b231d093e88f22e3b7214d52cd57d587762a584f1cc852d9d3e226930f794bab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056,Annotations:map[string]string{},},ImageRef:5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056,State:CONTAINER_EXITED,CreatedAt:1628635787343427526,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-20210810224820-30291,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 603b914543a305bf066dc8de01ce2232,},Annotations:map[string]string{io.kubernetes.container.hash: 589bcd22,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04efcb33acc3b6322ec09f6ab03cc72fb4eef3851be315661768b2c28b7580ca,PodSandboxId:f379dea9d5ffcac7e24e23f2e0dab8b1fc4800f05db9951983bce94277e3379a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2,Annotations:map[string]string{},},ImageRef:0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2,State:CONTAINER_EXITED,CreatedAt:1628635787160840275,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-20210810224820-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef1fd623bd23
c3b480f66cfc795fa0ab,},Annotations:map[string]string{io.kubernetes.container.hash: 37ef62f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c53eed59215f6778e54e6b1db239641870d0f6cbb1005d78553d81db5c7c652d,PodSandboxId:8129021bbf4ea8e5b5171ea9104ac69b5e520e6eb2031086365e913888c40529,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28,Annotations:map[string]string{},},ImageRef:78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28,State:CONTAINER_EXITED,CreatedAt:1628635787124721607,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-20210810224820-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb577061a17ad23cfbbf52e9419bf32a,},Annotations
:map[string]string{io.kubernetes.container.hash: 99930feb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=26189435-f943-4cbf-9209-059af3ac8e72 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:51:09 test-preload-20210810224820-30291 crio[7740]: time="2021-08-10 22:51:09.703607222Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a02381c5-cdd9-4c83-a6a1-3de6f9439f1c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:51:09 test-preload-20210810224820-30291 crio[7740]: time="2021-08-10 22:51:09.703811509Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a02381c5-cdd9-4c83-a6a1-3de6f9439f1c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:51:09 test-preload-20210810224820-30291 crio[7740]: time="2021-08-10 22:51:09.704095465Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:71fb588b5749d72c0be2a19a31e081cdd39c03185a093084126222ea78b43f8e,PodSandboxId:8817e7bc2fbab931abbf90e13416d071181d83b00531455160b6a7cc899b2088,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:e5cfb363c7caf31fd5a2a3dbd37b6bfc96099ce20209de8e6f55e00ae7ff56c7,State:CONTAINER_RUNNING,CreatedAt:1628635863755333775,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zvm9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26937be5-e3e9-4284-a360-e84c2f175b4f,},Annotations:map[string]string{io.kubernetes.container.hash: ecf76616,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5460150a2fee8086d52a326ac54ea685b608f48471759d6268917322e4aa38d3,PodSandboxId:5d6221697b2798582f8c5d189f1bf1b3372b25cba9c17d3ef7e345aea540944e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1628635863329161820,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9540d6e2-c97c-415d-b6be-20e310fbd70f,},Annotations:map[string]string{io.kubernetes.container.hash: 732f1ed2,io.kubernetes.container.restartCount: 3,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d9178227b406d73f10966c0ec1378aba7c4b728f22785fb68f48edaeb13e31d,PodSandboxId:51ebf16de90b214fe88bd71ea32d688a4e2b2b7b9c164da070627206dd35a1da,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:1951ff35852d301efa66f57928db5d5caa654068e5c6ea4d7b1c039dad1000d2,State:CONTAINER_RUNNING,CreatedAt:1628635863274171975,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6955765f44-9gbh9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8abdfb71-f1fb-4e43-a76f-55549541d3f5,},Annotations:map[string]string{io.kubernetes.container.hash: f9943294,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c99a52e5c8b3eb6ccf886219ddd66b283ac9b42d473602412b6db9e7ddb40ce,PodSandboxId:582d428e2ea5444151da095990afee1505e245275e50a320d3268353cf8c5dd2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d109c0821a2b9225b69b99a95000df5cd1de5d606bc187b3620d730d7769c6ad,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:f1b432e0ed98f3e2792cf81d6eaa62dc4bf37dd2efd9000b21d206e891ab1fb4,State:CONTAINER_RUNNING,CreatedAt:1628635856304242792,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-20210810224820-30291,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 29b5a3494fd7c53351d2b61e9b662a3a,},Annotations:map[string]string{io.kubernetes.container.hash: e3642d59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e805744318b0952975f4eabe512214832e121a2caf56cdfb9a0fff7697b3131,PodSandboxId:b1a3bfcf9c6347c38047f5a72241e608e61d18a072f4947a076ef66eb915c13a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b0f1517c1f4bb153597033d2efd81a9ac630e6a569307f993b2c0368afcf0302,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:4be02aa0403799a8de33152a8e495114960f0ace9c1dacebc52df978f58f3555,State:CONTAINER_RUNNING,CreatedAt:1628635856141920729,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-20210810224820-30291,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: c7178d8492f798ee160e507a1f6158eb,},Annotations:map[string]string{io.kubernetes.container.hash: 313be8e8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:225594d64682e26b96c0cce3458f20b8fada934b3d0a3c56e60f9bfd52f3dd75,PodSandboxId:32fb6a73a8e5a533e5fc8933f6e4b875deeca190a7f917a4e754cf94d9b8bb4a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90d27391b7808cde8d9a81cfa43b1e81de5c4912b4b52a7dccb19eb4fe3c236b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:692cc58967c316106ce83ca6a5c6617c2226126d44931e9372450364ad64c97b,State:CONTAINER_RUNNING,CreatedAt:1628635856113699504,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-20210810224820-30291,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 208e7765a66a346c75b6f7586d2fa387,},Annotations:map[string]string{io.kubernetes.container.hash: bab94b4f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19fe0b5a36f466184cf564cc711e40d996218f45ea352bad3c467ec0324f0a18,PodSandboxId:b1771928472232e4da441e292937469cc490a0b7a79c4ce484c7fc7cffbe8cf6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:cfbff4b3797c497fcb2b550d8ce345151f79f7e650212cd63bf2dc3cd5428274,State:CONTAINER_RUNNING,CreatedAt:1628635855826048341,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-20210810224820-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 23fac27a2c6b2f58cd70a4c9dc8d0da6,},Annotations:map[string]string{io.kubernetes.container.hash: f1d80633,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9f131c8976914b86f7ef77a7914c676182f1e4fa608de3ce864138068cb90c3,PodSandboxId:5d6221697b2798582f8c5d189f1bf1b3372b25cba9c17d3ef7e345aea540944e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1628635830604915673,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9540d6e2-c97c-415d-b6be-20e310fbd70f,},Annotations:map[string]string{io.kubernetes.container.hash: 732f1ed2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41003a20744ba44844bac954b01f50201088f18d7455625552f316ad5bf12024,PodSandboxId:51ebf16de90b214fe88bd71ea32d688a4e2b2b7b9c164da070627206dd35a1da,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61,Annotations:map[string]string{},},ImageRef:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61,State:CONTAINER_EXITED,CreatedAt:1628635813574363514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6955765f44-9gbh9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8abdfb71-f1fb-4e43-a76f-55549541d3f5,},Annotations:map[string]strin
g{io.kubernetes.container.hash: f9943294,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e1d34a786ab389aa821b4f9c92713d5556d250d23142b50dfc40e1fbc8965c4,PodSandboxId:8817e7bc2fbab931abbf90e13416d071181d83b00531455160b6a7cc899b2088,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19,Annotations:map[string]string{},},ImageRef:7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19,State:CONTAINER_EXITED,CreatedAt:1628635812356596398,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-zvm9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26937be5-e3e9-4284-a360-e84c2f175b4f,},Annotations:map[string]string{io.kubernetes.container.hash: ecf76616,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edde24c1a9c539830d810107febdd0ceaffbf9652c9cd6fa9b4faa07af471a26,PodSandboxId:b1771928472232e4da441e292937469cc490a0b7a79c4ce484c7fc7cffbe8cf6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,State:CONTAINER_EXITED,CreatedAt:1628635788431831211,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-20210810224820-30291,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 23fac27a2c6b2f58cd70a4c9dc8d0da6,},Annotations:map[string]string{io.kubernetes.container.hash: f1d80633,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9029fc5bb880e165bc23810305ee69777ad51357946b544b0960039cfc7e8870,PodSandboxId:b231d093e88f22e3b7214d52cd57d587762a584f1cc852d9d3e226930f794bab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056,Annotations:map[string]string{},},ImageRef:5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056,State:CONTAINER_EXITED,CreatedAt:1628635787343427526,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-20210810224820-30291,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 603b914543a305bf066dc8de01ce2232,},Annotations:map[string]string{io.kubernetes.container.hash: 589bcd22,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04efcb33acc3b6322ec09f6ab03cc72fb4eef3851be315661768b2c28b7580ca,PodSandboxId:f379dea9d5ffcac7e24e23f2e0dab8b1fc4800f05db9951983bce94277e3379a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2,Annotations:map[string]string{},},ImageRef:0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2,State:CONTAINER_EXITED,CreatedAt:1628635787160840275,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-20210810224820-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef1fd623bd23
c3b480f66cfc795fa0ab,},Annotations:map[string]string{io.kubernetes.container.hash: 37ef62f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c53eed59215f6778e54e6b1db239641870d0f6cbb1005d78553d81db5c7c652d,PodSandboxId:8129021bbf4ea8e5b5171ea9104ac69b5e520e6eb2031086365e913888c40529,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28,Annotations:map[string]string{},},ImageRef:78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28,State:CONTAINER_EXITED,CreatedAt:1628635787124721607,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-20210810224820-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb577061a17ad23cfbbf52e9419bf32a,},Annotations
:map[string]string{io.kubernetes.container.hash: 99930feb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a02381c5-cdd9-4c83-a6a1-3de6f9439f1c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:51:09 test-preload-20210810224820-30291 crio[7740]: time="2021-08-10 22:51:09.747216103Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e55c26a5-5509-44db-aee6-78798bafe7b6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:51:09 test-preload-20210810224820-30291 crio[7740]: time="2021-08-10 22:51:09.747277866Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e55c26a5-5509-44db-aee6-78798bafe7b6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:51:09 test-preload-20210810224820-30291 crio[7740]: time="2021-08-10 22:51:09.747527049Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:71fb588b5749d72c0be2a19a31e081cdd39c03185a093084126222ea78b43f8e,PodSandboxId:8817e7bc2fbab931abbf90e13416d071181d83b00531455160b6a7cc899b2088,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:e5cfb363c7caf31fd5a2a3dbd37b6bfc96099ce20209de8e6f55e00ae7ff56c7,State:CONTAINER_RUNNING,CreatedAt:1628635863755333775,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zvm9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26937be5-e3e9-4284-a360-e84c2f175b4f,},Annotations:map[string]string{io.kubernetes.container.hash: ecf76616,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5460150a2fee8086d52a326ac54ea685b608f48471759d6268917322e4aa38d3,PodSandboxId:5d6221697b2798582f8c5d189f1bf1b3372b25cba9c17d3ef7e345aea540944e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1628635863329161820,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9540d6e2-c97c-415d-b6be-20e310fbd70f,},Annotations:map[string]string{io.kubernetes.container.hash: 732f1ed2,io.kubernetes.container.restartCount: 3,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d9178227b406d73f10966c0ec1378aba7c4b728f22785fb68f48edaeb13e31d,PodSandboxId:51ebf16de90b214fe88bd71ea32d688a4e2b2b7b9c164da070627206dd35a1da,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:1951ff35852d301efa66f57928db5d5caa654068e5c6ea4d7b1c039dad1000d2,State:CONTAINER_RUNNING,CreatedAt:1628635863274171975,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6955765f44-9gbh9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8abdfb71-f1fb-4e43-a76f-55549541d3f5,},Annotations:map[string]string{io.kubernetes.container.hash: f9943294,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c99a52e5c8b3eb6ccf886219ddd66b283ac9b42d473602412b6db9e7ddb40ce,PodSandboxId:582d428e2ea5444151da095990afee1505e245275e50a320d3268353cf8c5dd2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d109c0821a2b9225b69b99a95000df5cd1de5d606bc187b3620d730d7769c6ad,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:f1b432e0ed98f3e2792cf81d6eaa62dc4bf37dd2efd9000b21d206e891ab1fb4,State:CONTAINER_RUNNING,CreatedAt:1628635856304242792,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-20210810224820-30291,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 29b5a3494fd7c53351d2b61e9b662a3a,},Annotations:map[string]string{io.kubernetes.container.hash: e3642d59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e805744318b0952975f4eabe512214832e121a2caf56cdfb9a0fff7697b3131,PodSandboxId:b1a3bfcf9c6347c38047f5a72241e608e61d18a072f4947a076ef66eb915c13a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b0f1517c1f4bb153597033d2efd81a9ac630e6a569307f993b2c0368afcf0302,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:4be02aa0403799a8de33152a8e495114960f0ace9c1dacebc52df978f58f3555,State:CONTAINER_RUNNING,CreatedAt:1628635856141920729,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-20210810224820-30291,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: c7178d8492f798ee160e507a1f6158eb,},Annotations:map[string]string{io.kubernetes.container.hash: 313be8e8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:225594d64682e26b96c0cce3458f20b8fada934b3d0a3c56e60f9bfd52f3dd75,PodSandboxId:32fb6a73a8e5a533e5fc8933f6e4b875deeca190a7f917a4e754cf94d9b8bb4a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90d27391b7808cde8d9a81cfa43b1e81de5c4912b4b52a7dccb19eb4fe3c236b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:692cc58967c316106ce83ca6a5c6617c2226126d44931e9372450364ad64c97b,State:CONTAINER_RUNNING,CreatedAt:1628635856113699504,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-20210810224820-30291,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 208e7765a66a346c75b6f7586d2fa387,},Annotations:map[string]string{io.kubernetes.container.hash: bab94b4f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19fe0b5a36f466184cf564cc711e40d996218f45ea352bad3c467ec0324f0a18,PodSandboxId:b1771928472232e4da441e292937469cc490a0b7a79c4ce484c7fc7cffbe8cf6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:cfbff4b3797c497fcb2b550d8ce345151f79f7e650212cd63bf2dc3cd5428274,State:CONTAINER_RUNNING,CreatedAt:1628635855826048341,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-20210810224820-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 23fac27a2c6b2f58cd70a4c9dc8d0da6,},Annotations:map[string]string{io.kubernetes.container.hash: f1d80633,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9f131c8976914b86f7ef77a7914c676182f1e4fa608de3ce864138068cb90c3,PodSandboxId:5d6221697b2798582f8c5d189f1bf1b3372b25cba9c17d3ef7e345aea540944e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1628635830604915673,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9540d6e2-c97c-415d-b6be-20e310fbd70f,},Annotations:map[string]string{io.kubernetes.container.hash: 732f1ed2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41003a20744ba44844bac954b01f50201088f18d7455625552f316ad5bf12024,PodSandboxId:51ebf16de90b214fe88bd71ea32d688a4e2b2b7b9c164da070627206dd35a1da,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61,Annotations:map[string]string{},},ImageRef:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61,State:CONTAINER_EXITED,CreatedAt:1628635813574363514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6955765f44-9gbh9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8abdfb71-f1fb-4e43-a76f-55549541d3f5,},Annotations:map[string]strin
g{io.kubernetes.container.hash: f9943294,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e1d34a786ab389aa821b4f9c92713d5556d250d23142b50dfc40e1fbc8965c4,PodSandboxId:8817e7bc2fbab931abbf90e13416d071181d83b00531455160b6a7cc899b2088,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19,Annotations:map[string]string{},},ImageRef:7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19,State:CONTAINER_EXITED,CreatedAt:1628635812356596398,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-zvm9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26937be5-e3e9-4284-a360-e84c2f175b4f,},Annotations:map[string]string{io.kubernetes.container.hash: ecf76616,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edde24c1a9c539830d810107febdd0ceaffbf9652c9cd6fa9b4faa07af471a26,PodSandboxId:b1771928472232e4da441e292937469cc490a0b7a79c4ce484c7fc7cffbe8cf6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,State:CONTAINER_EXITED,CreatedAt:1628635788431831211,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-20210810224820-30291,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 23fac27a2c6b2f58cd70a4c9dc8d0da6,},Annotations:map[string]string{io.kubernetes.container.hash: f1d80633,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9029fc5bb880e165bc23810305ee69777ad51357946b544b0960039cfc7e8870,PodSandboxId:b231d093e88f22e3b7214d52cd57d587762a584f1cc852d9d3e226930f794bab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056,Annotations:map[string]string{},},ImageRef:5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056,State:CONTAINER_EXITED,CreatedAt:1628635787343427526,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-20210810224820-30291,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 603b914543a305bf066dc8de01ce2232,},Annotations:map[string]string{io.kubernetes.container.hash: 589bcd22,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04efcb33acc3b6322ec09f6ab03cc72fb4eef3851be315661768b2c28b7580ca,PodSandboxId:f379dea9d5ffcac7e24e23f2e0dab8b1fc4800f05db9951983bce94277e3379a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2,Annotations:map[string]string{},},ImageRef:0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2,State:CONTAINER_EXITED,CreatedAt:1628635787160840275,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-20210810224820-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef1fd623bd23
c3b480f66cfc795fa0ab,},Annotations:map[string]string{io.kubernetes.container.hash: 37ef62f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c53eed59215f6778e54e6b1db239641870d0f6cbb1005d78553d81db5c7c652d,PodSandboxId:8129021bbf4ea8e5b5171ea9104ac69b5e520e6eb2031086365e913888c40529,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28,Annotations:map[string]string{},},ImageRef:78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28,State:CONTAINER_EXITED,CreatedAt:1628635787124721607,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-20210810224820-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb577061a17ad23cfbbf52e9419bf32a,},Annotations
:map[string]string{io.kubernetes.container.hash: 99930feb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e55c26a5-5509-44db-aee6-78798bafe7b6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:51:09 test-preload-20210810224820-30291 crio[7740]: time="2021-08-10 22:51:09.751447582Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=5f187e4d-c144-41b8-a11c-ca2a13f29b33 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Aug 10 22:51:09 test-preload-20210810224820-30291 crio[7740]: time="2021-08-10 22:51:09.751863752Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:32fb6a73a8e5a533e5fc8933f6e4b875deeca190a7f917a4e754cf94d9b8bb4a,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-20210810224820-30291,Uid:208e7765a66a346c75b6f7586d2fa387,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1628635855187795390,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-preload-20210810224820-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 208e7765a66a346c75b6f7586d2fa387,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 208e7765a66a346c75b6f7586d2fa387,kubernetes.io/config.seen: 2021-08-10T22:50:54.304291628Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b1a3bfcf9c6347c38047f5a72241e608e61d18a072f4947a076ef66eb915c13a,M
etadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-20210810224820-30291,Uid:c7178d8492f798ee160e507a1f6158eb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1628635855164896510,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-test-preload-20210810224820-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7178d8492f798ee160e507a1f6158eb,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c7178d8492f798ee160e507a1f6158eb,kubernetes.io/config.seen: 2021-08-10T22:50:54.304279365Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:582d428e2ea5444151da095990afee1505e245275e50a320d3268353cf8c5dd2,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-20210810224820-30291,Uid:29b5a3494fd7c53351d2b61e9b662a3a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1628635855160086669,Labels:map[string]string{compon
ent: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-test-preload-20210810224820-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29b5a3494fd7c53351d2b61e9b662a3a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 29b5a3494fd7c53351d2b61e9b662a3a,kubernetes.io/config.seen: 2021-08-10T22:50:54.304285881Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5d6221697b2798582f8c5d189f1bf1b3372b25cba9c17d3ef7e345aea540944e,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:9540d6e2-c97c-415d-b6be-20e310fbd70f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1628635814271147064,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9540d6e2-c97c-415d-b6be-20e310fbd70f,},Annotations:map[string]string{
kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2021-08-10T22:50:13.206951437Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:51ebf16de90b214fe88bd71ea32d688a4e2b2b7b9c164da070627206dd35a1da,Metadata:&PodSandboxMetadata{Name:coredns-6955765f44-9gbh9,Uid:8abdfb71-f1fb-4e43-a76f-55549541d3f5,Namespace:kube-system
,Attempt:0,},State:SANDBOX_READY,CreatedAt:1628635812971011755,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6955765f44-9gbh9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8abdfb71-f1fb-4e43-a76f-55549541d3f5,k8s-app: kube-dns,pod-template-hash: 6955765f44,},Annotations:map[string]string{kubernetes.io/config.seen: 2021-08-10T22:50:12.523568229Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8817e7bc2fbab931abbf90e13416d071181d83b00531455160b6a7cc899b2088,Metadata:&PodSandboxMetadata{Name:kube-proxy-zvm9g,Uid:26937be5-e3e9-4284-a360-e84c2f175b4f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1628635811662114914,Labels:map[string]string{controller-revision-hash: 68bd87b66,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-zvm9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26937be5-e3e9-4284-a360-e84c2f175b4f,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[str
ing]string{kubernetes.io/config.seen: 2021-08-10T22:50:11.309447742Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8129021bbf4ea8e5b5171ea9104ac69b5e520e6eb2031086365e913888c40529,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-20210810224820-30291,Uid:bb577061a17ad23cfbbf52e9419bf32a,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1628635786232480746,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-test-preload-20210810224820-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb577061a17ad23cfbbf52e9419bf32a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: bb577061a17ad23cfbbf52e9419bf32a,kubernetes.io/config.seen: 2021-08-10T22:49:44.337549067Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b1771928472232e4da441e292937469cc490a0b7a79c4ce484c7fc7cffbe8cf6,Metadata:&PodSandboxMetadata{Name:etcd-test-preload-2021
0810224820-30291,Uid:23fac27a2c6b2f58cd70a4c9dc8d0da6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1628635786228863135,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-test-preload-20210810224820-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23fac27a2c6b2f58cd70a4c9dc8d0da6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 23fac27a2c6b2f58cd70a4c9dc8d0da6,kubernetes.io/config.seen: 2021-08-10T22:49:44.33755471Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f379dea9d5ffcac7e24e23f2e0dab8b1fc4800f05db9951983bce94277e3379a,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-20210810224820-30291,Uid:ef1fd623bd23c3b480f66cfc795fa0ab,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1628635786210894743,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-preload-2021
0810224820-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef1fd623bd23c3b480f66cfc795fa0ab,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ef1fd623bd23c3b480f66cfc795fa0ab,kubernetes.io/config.seen: 2021-08-10T22:49:44.337557499Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b231d093e88f22e3b7214d52cd57d587762a584f1cc852d9d3e226930f794bab,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-20210810224820-30291,Uid:603b914543a305bf066dc8de01ce2232,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1628635786198440349,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-test-preload-20210810224820-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 603b914543a305bf066dc8de01ce2232,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 603b914543a305bf066dc8de01ce2232,kuber
netes.io/config.seen: 2021-08-10T22:49:44.337559972Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=5f187e4d-c144-41b8-a11c-ca2a13f29b33 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Aug 10 22:51:09 test-preload-20210810224820-30291 crio[7740]: time="2021-08-10 22:51:09.753086539Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=039fd236-8872-443f-9018-4af388ef6148 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:51:09 test-preload-20210810224820-30291 crio[7740]: time="2021-08-10 22:51:09.753140991Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=039fd236-8872-443f-9018-4af388ef6148 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:51:09 test-preload-20210810224820-30291 crio[7740]: time="2021-08-10 22:51:09.753376805Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:71fb588b5749d72c0be2a19a31e081cdd39c03185a093084126222ea78b43f8e,PodSandboxId:8817e7bc2fbab931abbf90e13416d071181d83b00531455160b6a7cc899b2088,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:e5cfb363c7caf31fd5a2a3dbd37b6bfc96099ce20209de8e6f55e00ae7ff56c7,State:CONTAINER_RUNNING,CreatedAt:1628635863755333775,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zvm9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26937be5-e3e9-4284-a360-e84c2f175b4f,},Annotations:map[string]string{io.kubernetes.container.hash: ecf76616,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5460150a2fee8086d52a326ac54ea685b608f48471759d6268917322e4aa38d3,PodSandboxId:5d6221697b2798582f8c5d189f1bf1b3372b25cba9c17d3ef7e345aea540944e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1628635863329161820,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9540d6e2-c97c-415d-b6be-20e310fbd70f,},Annotations:map[string]string{io.kubernetes.container.hash: 732f1ed2,io.kubernetes.container.restartCount: 3,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d9178227b406d73f10966c0ec1378aba7c4b728f22785fb68f48edaeb13e31d,PodSandboxId:51ebf16de90b214fe88bd71ea32d688a4e2b2b7b9c164da070627206dd35a1da,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:1951ff35852d301efa66f57928db5d5caa654068e5c6ea4d7b1c039dad1000d2,State:CONTAINER_RUNNING,CreatedAt:1628635863274171975,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6955765f44-9gbh9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8abdfb71-f1fb-4e43-a76f-55549541d3f5,},Annotations:map[string]string{io.kubernetes.container.hash: f9943294,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c99a52e5c8b3eb6ccf886219ddd66b283ac9b42d473602412b6db9e7ddb40ce,PodSandboxId:582d428e2ea5444151da095990afee1505e245275e50a320d3268353cf8c5dd2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d109c0821a2b9225b69b99a95000df5cd1de5d606bc187b3620d730d7769c6ad,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:f1b432e0ed98f3e2792cf81d6eaa62dc4bf37dd2efd9000b21d206e891ab1fb4,State:CONTAINER_RUNNING,CreatedAt:1628635856304242792,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-20210810224820-30291,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 29b5a3494fd7c53351d2b61e9b662a3a,},Annotations:map[string]string{io.kubernetes.container.hash: e3642d59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e805744318b0952975f4eabe512214832e121a2caf56cdfb9a0fff7697b3131,PodSandboxId:b1a3bfcf9c6347c38047f5a72241e608e61d18a072f4947a076ef66eb915c13a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b0f1517c1f4bb153597033d2efd81a9ac630e6a569307f993b2c0368afcf0302,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:4be02aa0403799a8de33152a8e495114960f0ace9c1dacebc52df978f58f3555,State:CONTAINER_RUNNING,CreatedAt:1628635856141920729,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-20210810224820-30291,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: c7178d8492f798ee160e507a1f6158eb,},Annotations:map[string]string{io.kubernetes.container.hash: 313be8e8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:225594d64682e26b96c0cce3458f20b8fada934b3d0a3c56e60f9bfd52f3dd75,PodSandboxId:32fb6a73a8e5a533e5fc8933f6e4b875deeca190a7f917a4e754cf94d9b8bb4a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90d27391b7808cde8d9a81cfa43b1e81de5c4912b4b52a7dccb19eb4fe3c236b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:692cc58967c316106ce83ca6a5c6617c2226126d44931e9372450364ad64c97b,State:CONTAINER_RUNNING,CreatedAt:1628635856113699504,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-20210810224820-30291,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 208e7765a66a346c75b6f7586d2fa387,},Annotations:map[string]string{io.kubernetes.container.hash: bab94b4f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19fe0b5a36f466184cf564cc711e40d996218f45ea352bad3c467ec0324f0a18,PodSandboxId:b1771928472232e4da441e292937469cc490a0b7a79c4ce484c7fc7cffbe8cf6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:cfbff4b3797c497fcb2b550d8ce345151f79f7e650212cd63bf2dc3cd5428274,State:CONTAINER_RUNNING,CreatedAt:1628635855826048341,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-20210810224820-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 23fac27a2c6b2f58cd70a4c9dc8d0da6,},Annotations:map[string]string{io.kubernetes.container.hash: f1d80633,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9f131c8976914b86f7ef77a7914c676182f1e4fa608de3ce864138068cb90c3,PodSandboxId:5d6221697b2798582f8c5d189f1bf1b3372b25cba9c17d3ef7e345aea540944e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1628635830604915673,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9540d6e2-c97c-415d-b6be-20e310fbd70f,},Annotations:map[string]string{io.kubernetes.container.hash: 732f1ed2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41003a20744ba44844bac954b01f50201088f18d7455625552f316ad5bf12024,PodSandboxId:51ebf16de90b214fe88bd71ea32d688a4e2b2b7b9c164da070627206dd35a1da,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61,Annotations:map[string]string{},},ImageRef:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61,State:CONTAINER_EXITED,CreatedAt:1628635813574363514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6955765f44-9gbh9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8abdfb71-f1fb-4e43-a76f-55549541d3f5,},Annotations:map[string]strin
g{io.kubernetes.container.hash: f9943294,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e1d34a786ab389aa821b4f9c92713d5556d250d23142b50dfc40e1fbc8965c4,PodSandboxId:8817e7bc2fbab931abbf90e13416d071181d83b00531455160b6a7cc899b2088,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19,Annotations:map[string]string{},},ImageRef:7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19,State:CONTAINER_EXITED,CreatedAt:1628635812356596398,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-zvm9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26937be5-e3e9-4284-a360-e84c2f175b4f,},Annotations:map[string]string{io.kubernetes.container.hash: ecf76616,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edde24c1a9c539830d810107febdd0ceaffbf9652c9cd6fa9b4faa07af471a26,PodSandboxId:b1771928472232e4da441e292937469cc490a0b7a79c4ce484c7fc7cffbe8cf6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,State:CONTAINER_EXITED,CreatedAt:1628635788431831211,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-20210810224820-30291,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 23fac27a2c6b2f58cd70a4c9dc8d0da6,},Annotations:map[string]string{io.kubernetes.container.hash: f1d80633,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9029fc5bb880e165bc23810305ee69777ad51357946b544b0960039cfc7e8870,PodSandboxId:b231d093e88f22e3b7214d52cd57d587762a584f1cc852d9d3e226930f794bab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056,Annotations:map[string]string{},},ImageRef:5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056,State:CONTAINER_EXITED,CreatedAt:1628635787343427526,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-20210810224820-30291,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 603b914543a305bf066dc8de01ce2232,},Annotations:map[string]string{io.kubernetes.container.hash: 589bcd22,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04efcb33acc3b6322ec09f6ab03cc72fb4eef3851be315661768b2c28b7580ca,PodSandboxId:f379dea9d5ffcac7e24e23f2e0dab8b1fc4800f05db9951983bce94277e3379a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2,Annotations:map[string]string{},},ImageRef:0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2,State:CONTAINER_EXITED,CreatedAt:1628635787160840275,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-20210810224820-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef1fd623bd23
c3b480f66cfc795fa0ab,},Annotations:map[string]string{io.kubernetes.container.hash: 37ef62f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c53eed59215f6778e54e6b1db239641870d0f6cbb1005d78553d81db5c7c652d,PodSandboxId:8129021bbf4ea8e5b5171ea9104ac69b5e520e6eb2031086365e913888c40529,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28,Annotations:map[string]string{},},ImageRef:78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28,State:CONTAINER_EXITED,CreatedAt:1628635787124721607,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-20210810224820-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb577061a17ad23cfbbf52e9419bf32a,},Annotations
:map[string]string{io.kubernetes.container.hash: 99930feb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=039fd236-8872-443f-9018-4af388ef6148 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:51:09 test-preload-20210810224820-30291 crio[7740]: time="2021-08-10 22:51:09.792885629Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7c637dc9-847f-456d-93fd-1e8032bdd2f2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:51:09 test-preload-20210810224820-30291 crio[7740]: time="2021-08-10 22:51:09.792940277Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7c637dc9-847f-456d-93fd-1e8032bdd2f2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:51:09 test-preload-20210810224820-30291 crio[7740]: time="2021-08-10 22:51:09.793220447Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:71fb588b5749d72c0be2a19a31e081cdd39c03185a093084126222ea78b43f8e,PodSandboxId:8817e7bc2fbab931abbf90e13416d071181d83b00531455160b6a7cc899b2088,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:e5cfb363c7caf31fd5a2a3dbd37b6bfc96099ce20209de8e6f55e00ae7ff56c7,State:CONTAINER_RUNNING,CreatedAt:1628635863755333775,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zvm9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26937be5-e3e9-4284-a360-e84c2f175b4f,},Annotations:map[string]string{io.kubernetes.container.hash: ecf76616,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5460150a2fee8086d52a326ac54ea685b608f48471759d6268917322e4aa38d3,PodSandboxId:5d6221697b2798582f8c5d189f1bf1b3372b25cba9c17d3ef7e345aea540944e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1628635863329161820,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9540d6e2-c97c-415d-b6be-20e310fbd70f,},Annotations:map[string]string{io.kubernetes.container.hash: 732f1ed2,io.kubernetes.container.restartCount: 3,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d9178227b406d73f10966c0ec1378aba7c4b728f22785fb68f48edaeb13e31d,PodSandboxId:51ebf16de90b214fe88bd71ea32d688a4e2b2b7b9c164da070627206dd35a1da,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:1951ff35852d301efa66f57928db5d5caa654068e5c6ea4d7b1c039dad1000d2,State:CONTAINER_RUNNING,CreatedAt:1628635863274171975,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6955765f44-9gbh9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8abdfb71-f1fb-4e43-a76f-55549541d3f5,},Annotations:map[string]string{io.kubernetes.container.hash: f9943294,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c99a52e5c8b3eb6ccf886219ddd66b283ac9b42d473602412b6db9e7ddb40ce,PodSandboxId:582d428e2ea5444151da095990afee1505e245275e50a320d3268353cf8c5dd2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d109c0821a2b9225b69b99a95000df5cd1de5d606bc187b3620d730d7769c6ad,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:f1b432e0ed98f3e2792cf81d6eaa62dc4bf37dd2efd9000b21d206e891ab1fb4,State:CONTAINER_RUNNING,CreatedAt:1628635856304242792,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-20210810224820-30291,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 29b5a3494fd7c53351d2b61e9b662a3a,},Annotations:map[string]string{io.kubernetes.container.hash: e3642d59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e805744318b0952975f4eabe512214832e121a2caf56cdfb9a0fff7697b3131,PodSandboxId:b1a3bfcf9c6347c38047f5a72241e608e61d18a072f4947a076ef66eb915c13a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b0f1517c1f4bb153597033d2efd81a9ac630e6a569307f993b2c0368afcf0302,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:4be02aa0403799a8de33152a8e495114960f0ace9c1dacebc52df978f58f3555,State:CONTAINER_RUNNING,CreatedAt:1628635856141920729,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-20210810224820-30291,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: c7178d8492f798ee160e507a1f6158eb,},Annotations:map[string]string{io.kubernetes.container.hash: 313be8e8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:225594d64682e26b96c0cce3458f20b8fada934b3d0a3c56e60f9bfd52f3dd75,PodSandboxId:32fb6a73a8e5a533e5fc8933f6e4b875deeca190a7f917a4e754cf94d9b8bb4a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90d27391b7808cde8d9a81cfa43b1e81de5c4912b4b52a7dccb19eb4fe3c236b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:692cc58967c316106ce83ca6a5c6617c2226126d44931e9372450364ad64c97b,State:CONTAINER_RUNNING,CreatedAt:1628635856113699504,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-20210810224820-30291,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 208e7765a66a346c75b6f7586d2fa387,},Annotations:map[string]string{io.kubernetes.container.hash: bab94b4f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19fe0b5a36f466184cf564cc711e40d996218f45ea352bad3c467ec0324f0a18,PodSandboxId:b1771928472232e4da441e292937469cc490a0b7a79c4ce484c7fc7cffbe8cf6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:cfbff4b3797c497fcb2b550d8ce345151f79f7e650212cd63bf2dc3cd5428274,State:CONTAINER_RUNNING,CreatedAt:1628635855826048341,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-20210810224820-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 23fac27a2c6b2f58cd70a4c9dc8d0da6,},Annotations:map[string]string{io.kubernetes.container.hash: f1d80633,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9f131c8976914b86f7ef77a7914c676182f1e4fa608de3ce864138068cb90c3,PodSandboxId:5d6221697b2798582f8c5d189f1bf1b3372b25cba9c17d3ef7e345aea540944e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1628635830604915673,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9540d6e2-c97c-415d-b6be-20e310fbd70f,},Annotations:map[string]string{io.kubernetes.container.hash: 732f1ed2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41003a20744ba44844bac954b01f50201088f18d7455625552f316ad5bf12024,PodSandboxId:51ebf16de90b214fe88bd71ea32d688a4e2b2b7b9c164da070627206dd35a1da,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61,Annotations:map[string]string{},},ImageRef:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61,State:CONTAINER_EXITED,CreatedAt:1628635813574363514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6955765f44-9gbh9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8abdfb71-f1fb-4e43-a76f-55549541d3f5,},Annotations:map[string]strin
g{io.kubernetes.container.hash: f9943294,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e1d34a786ab389aa821b4f9c92713d5556d250d23142b50dfc40e1fbc8965c4,PodSandboxId:8817e7bc2fbab931abbf90e13416d071181d83b00531455160b6a7cc899b2088,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19,Annotations:map[string]string{},},ImageRef:7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19,State:CONTAINER_EXITED,CreatedAt:1628635812356596398,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-zvm9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26937be5-e3e9-4284-a360-e84c2f175b4f,},Annotations:map[string]string{io.kubernetes.container.hash: ecf76616,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edde24c1a9c539830d810107febdd0ceaffbf9652c9cd6fa9b4faa07af471a26,PodSandboxId:b1771928472232e4da441e292937469cc490a0b7a79c4ce484c7fc7cffbe8cf6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,State:CONTAINER_EXITED,CreatedAt:1628635788431831211,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-20210810224820-30291,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 23fac27a2c6b2f58cd70a4c9dc8d0da6,},Annotations:map[string]string{io.kubernetes.container.hash: f1d80633,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9029fc5bb880e165bc23810305ee69777ad51357946b544b0960039cfc7e8870,PodSandboxId:b231d093e88f22e3b7214d52cd57d587762a584f1cc852d9d3e226930f794bab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056,Annotations:map[string]string{},},ImageRef:5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056,State:CONTAINER_EXITED,CreatedAt:1628635787343427526,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-20210810224820-30291,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 603b914543a305bf066dc8de01ce2232,},Annotations:map[string]string{io.kubernetes.container.hash: 589bcd22,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04efcb33acc3b6322ec09f6ab03cc72fb4eef3851be315661768b2c28b7580ca,PodSandboxId:f379dea9d5ffcac7e24e23f2e0dab8b1fc4800f05db9951983bce94277e3379a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2,Annotations:map[string]string{},},ImageRef:0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2,State:CONTAINER_EXITED,CreatedAt:1628635787160840275,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-20210810224820-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef1fd623bd23
c3b480f66cfc795fa0ab,},Annotations:map[string]string{io.kubernetes.container.hash: 37ef62f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c53eed59215f6778e54e6b1db239641870d0f6cbb1005d78553d81db5c7c652d,PodSandboxId:8129021bbf4ea8e5b5171ea9104ac69b5e520e6eb2031086365e913888c40529,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28,Annotations:map[string]string{},},ImageRef:78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28,State:CONTAINER_EXITED,CreatedAt:1628635787124721607,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-20210810224820-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb577061a17ad23cfbbf52e9419bf32a,},Annotations
:map[string]string{io.kubernetes.container.hash: 99930feb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7c637dc9-847f-456d-93fd-1e8032bdd2f2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:51:09 test-preload-20210810224820-30291 crio[7740]: time="2021-08-10 22:51:09.825799505Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c56dc77e-3cd3-4596-acf3-deeaf80b5629 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:51:09 test-preload-20210810224820-30291 crio[7740]: time="2021-08-10 22:51:09.825852847Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c56dc77e-3cd3-4596-acf3-deeaf80b5629 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:51:09 test-preload-20210810224820-30291 crio[7740]: time="2021-08-10 22:51:09.826229576Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:71fb588b5749d72c0be2a19a31e081cdd39c03185a093084126222ea78b43f8e,PodSandboxId:8817e7bc2fbab931abbf90e13416d071181d83b00531455160b6a7cc899b2088,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:e5cfb363c7caf31fd5a2a3dbd37b6bfc96099ce20209de8e6f55e00ae7ff56c7,State:CONTAINER_RUNNING,CreatedAt:1628635863755333775,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zvm9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26937be5-e3e9-4284-a360-e84c2f175b4f,},Annotations:map[string]string{io.kubernetes.container.hash: ecf76616,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5460150a2fee8086d52a326ac54ea685b608f48471759d6268917322e4aa38d3,PodSandboxId:5d6221697b2798582f8c5d189f1bf1b3372b25cba9c17d3ef7e345aea540944e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1628635863329161820,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9540d6e2-c97c-415d-b6be-20e310fbd70f,},Annotations:map[string]string{io.kubernetes.container.hash: 732f1ed2,io.kubernetes.container.restartCount: 3,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d9178227b406d73f10966c0ec1378aba7c4b728f22785fb68f48edaeb13e31d,PodSandboxId:51ebf16de90b214fe88bd71ea32d688a4e2b2b7b9c164da070627206dd35a1da,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:1951ff35852d301efa66f57928db5d5caa654068e5c6ea4d7b1c039dad1000d2,State:CONTAINER_RUNNING,CreatedAt:1628635863274171975,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6955765f44-9gbh9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8abdfb71-f1fb-4e43-a76f-55549541d3f5,},Annotations:map[string]string{io.kubernetes.container.hash: f9943294,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c99a52e5c8b3eb6ccf886219ddd66b283ac9b42d473602412b6db9e7ddb40ce,PodSandboxId:582d428e2ea5444151da095990afee1505e245275e50a320d3268353cf8c5dd2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d109c0821a2b9225b69b99a95000df5cd1de5d606bc187b3620d730d7769c6ad,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:f1b432e0ed98f3e2792cf81d6eaa62dc4bf37dd2efd9000b21d206e891ab1fb4,State:CONTAINER_RUNNING,CreatedAt:1628635856304242792,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-20210810224820-30291,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 29b5a3494fd7c53351d2b61e9b662a3a,},Annotations:map[string]string{io.kubernetes.container.hash: e3642d59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e805744318b0952975f4eabe512214832e121a2caf56cdfb9a0fff7697b3131,PodSandboxId:b1a3bfcf9c6347c38047f5a72241e608e61d18a072f4947a076ef66eb915c13a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b0f1517c1f4bb153597033d2efd81a9ac630e6a569307f993b2c0368afcf0302,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:4be02aa0403799a8de33152a8e495114960f0ace9c1dacebc52df978f58f3555,State:CONTAINER_RUNNING,CreatedAt:1628635856141920729,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-20210810224820-30291,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: c7178d8492f798ee160e507a1f6158eb,},Annotations:map[string]string{io.kubernetes.container.hash: 313be8e8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:225594d64682e26b96c0cce3458f20b8fada934b3d0a3c56e60f9bfd52f3dd75,PodSandboxId:32fb6a73a8e5a533e5fc8933f6e4b875deeca190a7f917a4e754cf94d9b8bb4a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90d27391b7808cde8d9a81cfa43b1e81de5c4912b4b52a7dccb19eb4fe3c236b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:692cc58967c316106ce83ca6a5c6617c2226126d44931e9372450364ad64c97b,State:CONTAINER_RUNNING,CreatedAt:1628635856113699504,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-20210810224820-30291,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 208e7765a66a346c75b6f7586d2fa387,},Annotations:map[string]string{io.kubernetes.container.hash: bab94b4f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19fe0b5a36f466184cf564cc711e40d996218f45ea352bad3c467ec0324f0a18,PodSandboxId:b1771928472232e4da441e292937469cc490a0b7a79c4ce484c7fc7cffbe8cf6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:cfbff4b3797c497fcb2b550d8ce345151f79f7e650212cd63bf2dc3cd5428274,State:CONTAINER_RUNNING,CreatedAt:1628635855826048341,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-20210810224820-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 23fac27a2c6b2f58cd70a4c9dc8d0da6,},Annotations:map[string]string{io.kubernetes.container.hash: f1d80633,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9f131c8976914b86f7ef77a7914c676182f1e4fa608de3ce864138068cb90c3,PodSandboxId:5d6221697b2798582f8c5d189f1bf1b3372b25cba9c17d3ef7e345aea540944e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1628635830604915673,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9540d6e2-c97c-415d-b6be-20e310fbd70f,},Annotations:map[string]string{io.kubernetes.container.hash: 732f1ed2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41003a20744ba44844bac954b01f50201088f18d7455625552f316ad5bf12024,PodSandboxId:51ebf16de90b214fe88bd71ea32d688a4e2b2b7b9c164da070627206dd35a1da,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61,Annotations:map[string]string{},},ImageRef:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61,State:CONTAINER_EXITED,CreatedAt:1628635813574363514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6955765f44-9gbh9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8abdfb71-f1fb-4e43-a76f-55549541d3f5,},Annotations:map[string]strin
g{io.kubernetes.container.hash: f9943294,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e1d34a786ab389aa821b4f9c92713d5556d250d23142b50dfc40e1fbc8965c4,PodSandboxId:8817e7bc2fbab931abbf90e13416d071181d83b00531455160b6a7cc899b2088,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19,Annotations:map[string]string{},},ImageRef:7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19,State:CONTAINER_EXITED,CreatedAt:1628635812356596398,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-zvm9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26937be5-e3e9-4284-a360-e84c2f175b4f,},Annotations:map[string]string{io.kubernetes.container.hash: ecf76616,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edde24c1a9c539830d810107febdd0ceaffbf9652c9cd6fa9b4faa07af471a26,PodSandboxId:b1771928472232e4da441e292937469cc490a0b7a79c4ce484c7fc7cffbe8cf6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,State:CONTAINER_EXITED,CreatedAt:1628635788431831211,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-20210810224820-30291,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 23fac27a2c6b2f58cd70a4c9dc8d0da6,},Annotations:map[string]string{io.kubernetes.container.hash: f1d80633,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9029fc5bb880e165bc23810305ee69777ad51357946b544b0960039cfc7e8870,PodSandboxId:b231d093e88f22e3b7214d52cd57d587762a584f1cc852d9d3e226930f794bab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056,Annotations:map[string]string{},},ImageRef:5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056,State:CONTAINER_EXITED,CreatedAt:1628635787343427526,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-20210810224820-30291,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 603b914543a305bf066dc8de01ce2232,},Annotations:map[string]string{io.kubernetes.container.hash: 589bcd22,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04efcb33acc3b6322ec09f6ab03cc72fb4eef3851be315661768b2c28b7580ca,PodSandboxId:f379dea9d5ffcac7e24e23f2e0dab8b1fc4800f05db9951983bce94277e3379a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2,Annotations:map[string]string{},},ImageRef:0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2,State:CONTAINER_EXITED,CreatedAt:1628635787160840275,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-20210810224820-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef1fd623bd23
c3b480f66cfc795fa0ab,},Annotations:map[string]string{io.kubernetes.container.hash: 37ef62f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c53eed59215f6778e54e6b1db239641870d0f6cbb1005d78553d81db5c7c652d,PodSandboxId:8129021bbf4ea8e5b5171ea9104ac69b5e520e6eb2031086365e913888c40529,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28,Annotations:map[string]string{},},ImageRef:78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28,State:CONTAINER_EXITED,CreatedAt:1628635787124721607,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-20210810224820-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb577061a17ad23cfbbf52e9419bf32a,},Annotations
:map[string]string{io.kubernetes.container.hash: 99930feb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c56dc77e-3cd3-4596-acf3-deeaf80b5629 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:51:09 test-preload-20210810224820-30291 crio[7740]: time="2021-08-10 22:51:09.867064140Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=54b77622-6cb2-4ade-a2b9-65e3e2e78890 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:51:09 test-preload-20210810224820-30291 crio[7740]: time="2021-08-10 22:51:09.867182843Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=54b77622-6cb2-4ade-a2b9-65e3e2e78890 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 10 22:51:09 test-preload-20210810224820-30291 crio[7740]: time="2021-08-10 22:51:09.867489552Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:71fb588b5749d72c0be2a19a31e081cdd39c03185a093084126222ea78b43f8e,PodSandboxId:8817e7bc2fbab931abbf90e13416d071181d83b00531455160b6a7cc899b2088,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:e5cfb363c7caf31fd5a2a3dbd37b6bfc96099ce20209de8e6f55e00ae7ff56c7,State:CONTAINER_RUNNING,CreatedAt:1628635863755333775,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zvm9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26937be5-e3e9-4284-a360-e84c2f175b4f,},Annotations:map[string]string{io.kubernetes.container.hash: ecf76616,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5460150a2fee8086d52a326ac54ea685b608f48471759d6268917322e4aa38d3,PodSandboxId:5d6221697b2798582f8c5d189f1bf1b3372b25cba9c17d3ef7e345aea540944e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1628635863329161820,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9540d6e2-c97c-415d-b6be-20e310fbd70f,},Annotations:map[string]string{io.kubernetes.container.hash: 732f1ed2,io.kubernetes.container.restartCount: 3,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d9178227b406d73f10966c0ec1378aba7c4b728f22785fb68f48edaeb13e31d,PodSandboxId:51ebf16de90b214fe88bd71ea32d688a4e2b2b7b9c164da070627206dd35a1da,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:1951ff35852d301efa66f57928db5d5caa654068e5c6ea4d7b1c039dad1000d2,State:CONTAINER_RUNNING,CreatedAt:1628635863274171975,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6955765f44-9gbh9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8abdfb71-f1fb-4e43-a76f-55549541d3f5,},Annotations:map[string]string{io.kubernetes.container.hash: f9943294,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c99a52e5c8b3eb6ccf886219ddd66b283ac9b42d473602412b6db9e7ddb40ce,PodSandboxId:582d428e2ea5444151da095990afee1505e245275e50a320d3268353cf8c5dd2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d109c0821a2b9225b69b99a95000df5cd1de5d606bc187b3620d730d7769c6ad,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:f1b432e0ed98f3e2792cf81d6eaa62dc4bf37dd2efd9000b21d206e891ab1fb4,State:CONTAINER_RUNNING,CreatedAt:1628635856304242792,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-20210810224820-30291,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 29b5a3494fd7c53351d2b61e9b662a3a,},Annotations:map[string]string{io.kubernetes.container.hash: e3642d59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e805744318b0952975f4eabe512214832e121a2caf56cdfb9a0fff7697b3131,PodSandboxId:b1a3bfcf9c6347c38047f5a72241e608e61d18a072f4947a076ef66eb915c13a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b0f1517c1f4bb153597033d2efd81a9ac630e6a569307f993b2c0368afcf0302,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:4be02aa0403799a8de33152a8e495114960f0ace9c1dacebc52df978f58f3555,State:CONTAINER_RUNNING,CreatedAt:1628635856141920729,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-20210810224820-30291,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: c7178d8492f798ee160e507a1f6158eb,},Annotations:map[string]string{io.kubernetes.container.hash: 313be8e8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:225594d64682e26b96c0cce3458f20b8fada934b3d0a3c56e60f9bfd52f3dd75,PodSandboxId:32fb6a73a8e5a533e5fc8933f6e4b875deeca190a7f917a4e754cf94d9b8bb4a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90d27391b7808cde8d9a81cfa43b1e81de5c4912b4b52a7dccb19eb4fe3c236b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:692cc58967c316106ce83ca6a5c6617c2226126d44931e9372450364ad64c97b,State:CONTAINER_RUNNING,CreatedAt:1628635856113699504,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-20210810224820-30291,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 208e7765a66a346c75b6f7586d2fa387,},Annotations:map[string]string{io.kubernetes.container.hash: bab94b4f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19fe0b5a36f466184cf564cc711e40d996218f45ea352bad3c467ec0324f0a18,PodSandboxId:b1771928472232e4da441e292937469cc490a0b7a79c4ce484c7fc7cffbe8cf6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:cfbff4b3797c497fcb2b550d8ce345151f79f7e650212cd63bf2dc3cd5428274,State:CONTAINER_RUNNING,CreatedAt:1628635855826048341,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-20210810224820-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 23fac27a2c6b2f58cd70a4c9dc8d0da6,},Annotations:map[string]string{io.kubernetes.container.hash: f1d80633,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9f131c8976914b86f7ef77a7914c676182f1e4fa608de3ce864138068cb90c3,PodSandboxId:5d6221697b2798582f8c5d189f1bf1b3372b25cba9c17d3ef7e345aea540944e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1628635830604915673,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9540d6e2-c97c-415d-b6be-20e310fbd70f,},Annotations:map[string]string{io.kubernetes.container.hash: 732f1ed2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41003a20744ba44844bac954b01f50201088f18d7455625552f316ad5bf12024,PodSandboxId:51ebf16de90b214fe88bd71ea32d688a4e2b2b7b9c164da070627206dd35a1da,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61,Annotations:map[string]string{},},ImageRef:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61,State:CONTAINER_EXITED,CreatedAt:1628635813574363514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6955765f44-9gbh9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8abdfb71-f1fb-4e43-a76f-55549541d3f5,},Annotations:map[string]strin
g{io.kubernetes.container.hash: f9943294,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e1d34a786ab389aa821b4f9c92713d5556d250d23142b50dfc40e1fbc8965c4,PodSandboxId:8817e7bc2fbab931abbf90e13416d071181d83b00531455160b6a7cc899b2088,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19,Annotations:map[string]string{},},ImageRef:7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19,State:CONTAINER_EXITED,CreatedAt:1628635812356596398,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-zvm9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26937be5-e3e9-4284-a360-e84c2f175b4f,},Annotations:map[string]string{io.kubernetes.container.hash: ecf76616,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edde24c1a9c539830d810107febdd0ceaffbf9652c9cd6fa9b4faa07af471a26,PodSandboxId:b1771928472232e4da441e292937469cc490a0b7a79c4ce484c7fc7cffbe8cf6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,State:CONTAINER_EXITED,CreatedAt:1628635788431831211,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-20210810224820-30291,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 23fac27a2c6b2f58cd70a4c9dc8d0da6,},Annotations:map[string]string{io.kubernetes.container.hash: f1d80633,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9029fc5bb880e165bc23810305ee69777ad51357946b544b0960039cfc7e8870,PodSandboxId:b231d093e88f22e3b7214d52cd57d587762a584f1cc852d9d3e226930f794bab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056,Annotations:map[string]string{},},ImageRef:5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056,State:CONTAINER_EXITED,CreatedAt:1628635787343427526,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-20210810224820-30291,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 603b914543a305bf066dc8de01ce2232,},Annotations:map[string]string{io.kubernetes.container.hash: 589bcd22,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04efcb33acc3b6322ec09f6ab03cc72fb4eef3851be315661768b2c28b7580ca,PodSandboxId:f379dea9d5ffcac7e24e23f2e0dab8b1fc4800f05db9951983bce94277e3379a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2,Annotations:map[string]string{},},ImageRef:0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2,State:CONTAINER_EXITED,CreatedAt:1628635787160840275,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-20210810224820-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef1fd623bd23
c3b480f66cfc795fa0ab,},Annotations:map[string]string{io.kubernetes.container.hash: 37ef62f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c53eed59215f6778e54e6b1db239641870d0f6cbb1005d78553d81db5c7c652d,PodSandboxId:8129021bbf4ea8e5b5171ea9104ac69b5e520e6eb2031086365e913888c40529,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28,Annotations:map[string]string{},},ImageRef:78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28,State:CONTAINER_EXITED,CreatedAt:1628635787124721607,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-20210810224820-30291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb577061a17ad23cfbbf52e9419bf32a,},Annotations
:map[string]string{io.kubernetes.container.hash: 99930feb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=54b77622-6cb2-4ade-a2b9-65e3e2e78890 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID
	71fb588b5749d       7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19   6 seconds ago        Running             kube-proxy                1                   8817e7bc2fbab
	5460150a2fee8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   6 seconds ago        Running             storage-provisioner       3                   5d6221697b279
	7d9178227b406       70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61   6 seconds ago        Running             coredns                   1                   51ebf16de90b2
	1c99a52e5c8b3       d109c0821a2b9225b69b99a95000df5cd1de5d606bc187b3620d730d7769c6ad   13 seconds ago       Running             kube-scheduler            0                   582d428e2ea54
	4e805744318b0       b0f1517c1f4bb153597033d2efd81a9ac630e6a569307f993b2c0368afcf0302   13 seconds ago       Running             kube-controller-manager   0                   b1a3bfcf9c634
	225594d64682e       90d27391b7808cde8d9a81cfa43b1e81de5c4912b4b52a7dccb19eb4fe3c236b   13 seconds ago       Running             kube-apiserver            0                   32fb6a73a8e5a
	19fe0b5a36f46       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f   14 seconds ago       Running             etcd                      1                   b177192847223
	b9f131c897691       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   39 seconds ago       Exited              storage-provisioner       2                   5d6221697b279
	41003a20744ba       70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61   56 seconds ago       Exited              coredns                   0                   51ebf16de90b2
	4e1d34a786ab3       7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19   57 seconds ago       Exited              kube-proxy                0                   8817e7bc2fbab
	edde24c1a9c53       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f   About a minute ago   Exited              etcd                      0                   b177192847223
	9029fc5bb880e       5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056   About a minute ago   Exited              kube-controller-manager   0                   b231d093e88f2
	04efcb33acc3b       0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2   About a minute ago   Exited              kube-apiserver            0                   f379dea9d5ffc
	c53eed59215f6       78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28   About a minute ago   Exited              kube-scheduler            0                   8129021bbf4ea
	
	* 
	* ==> coredns [41003a20744ba44844bac954b01f50201088f18d7455625552f316ad5bf12024] <==
	* E0810 22:50:16.445022       1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
	E0810 22:50:17.441259       1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
	E0810 22:50:17.445846       1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
	E0810 22:50:17.446806       1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
	CoreDNS-1.6.5
	linux/amd64, go1.13.4, c2fd1b2
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = 83129990f9b0fc4324a2ced50c8d40af
	[INFO] Reloading complete
	E0810 22:50:16.444164       1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> coredns [7d9178227b406d73f10966c0ec1378aba7c4b728f22785fb68f48edaeb13e31d] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = 83129990f9b0fc4324a2ced50c8d40af
	CoreDNS-1.6.5
	linux/amd64, go1.13.4, c2fd1b2
	
	* 
	* ==> describe nodes <==
	* Name:               test-preload-20210810224820-30291
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-20210810224820-30291
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=877a5691753f15214a0c269ac69dcdc5a4d99fcd
	                    minikube.k8s.io/name=test-preload-20210810224820-30291
	                    minikube.k8s.io/updated_at=2021_08_10T22_49_56_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Aug 2021 22:49:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-20210810224820-30291
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Aug 2021 22:51:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Aug 2021 22:51:01 +0000   Tue, 10 Aug 2021 22:49:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Aug 2021 22:51:01 +0000   Tue, 10 Aug 2021 22:49:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Aug 2021 22:51:01 +0000   Tue, 10 Aug 2021 22:49:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Aug 2021 22:51:01 +0000   Tue, 10 Aug 2021 22:50:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.38
	  Hostname:    test-preload-20210810224820-30291
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186320Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186320Ki
	  pods:               110
	System Info:
	  Machine ID:                 a4689e819b85407bba0dc0d6cd5fae15
	  System UUID:                a4689e81-9b85-407b-ba0d-c0d6cd5fae15
	  Boot ID:                    1f7accb5-df76-41dc-bb9a-bd3ac25aef71
	  Kernel Version:             4.19.182
	  OS Image:                   Buildroot 2020.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.20.2
	  Kubelet Version:            v1.17.3
	  Kube-Proxy Version:         v1.17.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6955765f44-9gbh9                                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (7%)     60s
	  kube-system                 etcd-test-preload-20210810224820-30291                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-apiserver-test-preload-20210810224820-30291             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-controller-manager-test-preload-20210810224820-30291    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-proxy-zvm9g                                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-scheduler-test-preload-20210810224820-30291             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 storage-provisioner                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (3%)   170Mi (7%)
	  ephemeral-storage  0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From                                           Message
	  ----    ------                   ----               ----                                           -------
	  Normal  Starting                 74s                kubelet, test-preload-20210810224820-30291     Starting kubelet.
	  Normal  NodeHasSufficientMemory  74s                kubelet, test-preload-20210810224820-30291     Node test-preload-20210810224820-30291 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    74s                kubelet, test-preload-20210810224820-30291     Node test-preload-20210810224820-30291 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     74s                kubelet, test-preload-20210810224820-30291     Node test-preload-20210810224820-30291 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  74s                kubelet, test-preload-20210810224820-30291     Updated Node Allocatable limit across pods
	  Normal  NodeReady                64s                kubelet, test-preload-20210810224820-30291     Node test-preload-20210810224820-30291 status is now: NodeReady
	  Normal  Starting                 53s                kube-proxy, test-preload-20210810224820-30291  Starting kube-proxy.
	  Normal  Starting                 16s                kubelet, test-preload-20210810224820-30291     Starting kubelet.
	  Normal  NodeHasSufficientMemory  16s (x8 over 16s)  kubelet, test-preload-20210810224820-30291     Node test-preload-20210810224820-30291 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16s (x7 over 16s)  kubelet, test-preload-20210810224820-30291     Node test-preload-20210810224820-30291 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16s (x8 over 16s)  kubelet, test-preload-20210810224820-30291     Node test-preload-20210810224820-30291 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16s                kubelet, test-preload-20210810224820-30291     Updated Node Allocatable limit across pods
	  Normal  Starting                 7s                 kube-proxy, test-preload-20210810224820-30291  Starting kube-proxy.
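	The percentage figures in the "Allocated resources" table above follow from the node's allocatable capacity reported earlier in the same section (2 CPU, 2186320Ki memory); kubectl truncates toward zero rather than rounding when printing them. A standalone sanity check of that arithmetic (not part of the report's tooling):

	```python
	# Reproduce the request/limit percentages from the "describe nodes" output.
	# Allocatable values are taken from the log: 2 CPU (2000m) and 2186320Ki memory.
	ALLOCATABLE_CPU_M = 2000
	ALLOCATABLE_MEM_KI = 2186320

	def pct(used: float, total: float) -> int:
	    # kubectl truncates (does not round) when printing these percentages
	    return int(used / total * 100)

	print(pct(650, ALLOCATABLE_CPU_M))          # cpu requests: 650m   -> 32
	print(pct(70 * 1024, ALLOCATABLE_MEM_KI))   # memory requests: 70Mi  -> 3
	print(pct(170 * 1024, ALLOCATABLE_MEM_KI))  # memory limits: 170Mi   -> 7
	```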
	
	* 
	* ==> dmesg <==
	* [  +4.767653] Unstable clock detected, switching default tracing clock to "global"
	              If you want to keep using the local clock, then add:
	                "trace_clock=local"
	              on the kernel command line
	[  +0.000023] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +4.024535] systemd-fstab-generator[1160]: Ignoring "noauto" for root device
	[  +0.041446] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000004] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +1.089430] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1717 comm=systemd-network
	[  +1.231889] vboxguest: loading out-of-tree module taints kernel.
	[  +0.006096] vboxguest: PCI device not found, probably running on physical hardware.
	[  +2.016993] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[  +5.337025] systemd-fstab-generator[2142]: Ignoring "noauto" for root device
	[  +0.136178] systemd-fstab-generator[2155]: Ignoring "noauto" for root device
	[  +0.188433] systemd-fstab-generator[2181]: Ignoring "noauto" for root device
	[Aug10 22:49] systemd-fstab-generator[3313]: Ignoring "noauto" for root device
	[ +14.312998] systemd-fstab-generator[3745]: Ignoring "noauto" for root device
	[Aug10 22:50] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.000423] kauditd_printk_skb: 14 callbacks suppressed
	[ +11.683999] systemd-fstab-generator[7983]: Ignoring "noauto" for root device
	[  +0.202423] systemd-fstab-generator[7996]: Ignoring "noauto" for root device
	[  +0.243156] systemd-fstab-generator[8017]: Ignoring "noauto" for root device
	[  +8.425709] NFSD: Unable to end grace period: -110
	[ +15.415460] systemd-fstab-generator[9142]: Ignoring "noauto" for root device
	[Aug10 22:51] kauditd_printk_skb: 35 callbacks suppressed
	
	* 
	* ==> etcd [19fe0b5a36f466184cf564cc711e40d996218f45ea352bad3c467ec0324f0a18] <==
	* 2021-08-10 22:50:56.125288 I | embed: initial advertise peer URLs = https://192.168.50.38:2380
	2021-08-10 22:50:56.125293 I | embed: initial cluster = 
	2021-08-10 22:50:56.140176 I | etcdserver: restarting member 21c1ddd48015c0d4 in cluster 8f09f9d2d10c62aa at commit index 431
	raft2021/08/10 22:50:56 INFO: 21c1ddd48015c0d4 switched to configuration voters=()
	raft2021/08/10 22:50:56 INFO: 21c1ddd48015c0d4 became follower at term 2
	raft2021/08/10 22:50:56 INFO: newRaft 21c1ddd48015c0d4 [peers: [], term: 2, commit: 431, applied: 0, lastindex: 431, lastterm: 2]
	2021-08-10 22:50:56.158159 W | auth: simple token is not cryptographically signed
	2021-08-10 22:50:56.160570 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	raft2021/08/10 22:50:56 INFO: 21c1ddd48015c0d4 switched to configuration voters=(2432469178508493012)
	2021-08-10 22:50:56.163585 I | etcdserver/membership: added member 21c1ddd48015c0d4 [https://192.168.50.38:2380] to cluster 8f09f9d2d10c62aa
	2021-08-10 22:50:56.163919 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-10 22:50:56.164006 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-10 22:50:56.167134 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-10 22:50:56.167853 I | embed: listening for metrics on http://127.0.0.1:2381
	2021-08-10 22:50:56.168086 I | embed: listening for peers on 192.168.50.38:2380
	raft2021/08/10 22:50:57 INFO: 21c1ddd48015c0d4 is starting a new election at term 2
	raft2021/08/10 22:50:57 INFO: 21c1ddd48015c0d4 became candidate at term 3
	raft2021/08/10 22:50:57 INFO: 21c1ddd48015c0d4 received MsgVoteResp from 21c1ddd48015c0d4 at term 3
	raft2021/08/10 22:50:57 INFO: 21c1ddd48015c0d4 became leader at term 3
	raft2021/08/10 22:50:57 INFO: raft.node: 21c1ddd48015c0d4 elected leader 21c1ddd48015c0d4 at term 3
	2021-08-10 22:50:57.942271 I | etcdserver: published {Name:test-preload-20210810224820-30291 ClientURLs:[https://192.168.50.38:2379]} to cluster 8f09f9d2d10c62aa
	2021-08-10 22:50:57.942529 I | embed: ready to serve client requests
	2021-08-10 22:50:57.943128 I | embed: ready to serve client requests
	2021-08-10 22:50:57.945090 I | embed: serving client requests on 192.168.50.38:2379
	2021-08-10 22:50:57.945463 I | embed: serving client requests on 127.0.0.1:2379
	
	* 
	* ==> etcd [edde24c1a9c539830d810107febdd0ceaffbf9652c9cd6fa9b4faa07af471a26] <==
	* raft2021/08/10 22:49:48 INFO: newRaft 21c1ddd48015c0d4 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2021/08/10 22:49:48 INFO: 21c1ddd48015c0d4 became follower at term 1
	raft2021/08/10 22:49:48 INFO: 21c1ddd48015c0d4 switched to configuration voters=(2432469178508493012)
	2021-08-10 22:49:48.522116 W | auth: simple token is not cryptographically signed
	2021-08-10 22:49:48.526126 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2021-08-10 22:49:48.529184 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-10 22:49:48.530342 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2021/08/10 22:49:48 INFO: 21c1ddd48015c0d4 switched to configuration voters=(2432469178508493012)
	2021-08-10 22:49:48.530751 I | etcdserver/membership: added member 21c1ddd48015c0d4 [https://192.168.50.38:2380] to cluster 8f09f9d2d10c62aa
	2021-08-10 22:49:48.530886 I | etcdserver: 21c1ddd48015c0d4 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2021-08-10 22:49:48.531011 I | embed: listening for peers on 192.168.50.38:2380
	raft2021/08/10 22:49:49 INFO: 21c1ddd48015c0d4 is starting a new election at term 1
	raft2021/08/10 22:49:49 INFO: 21c1ddd48015c0d4 became candidate at term 2
	raft2021/08/10 22:49:49 INFO: 21c1ddd48015c0d4 received MsgVoteResp from 21c1ddd48015c0d4 at term 2
	raft2021/08/10 22:49:49 INFO: 21c1ddd48015c0d4 became leader at term 2
	raft2021/08/10 22:49:49 INFO: raft.node: 21c1ddd48015c0d4 elected leader 21c1ddd48015c0d4 at term 2
	2021-08-10 22:49:49.117117 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-10 22:49:49.118125 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-10 22:49:49.118224 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-10 22:49:49.118277 I | etcdserver: published {Name:test-preload-20210810224820-30291 ClientURLs:[https://192.168.50.38:2379]} to cluster 8f09f9d2d10c62aa
	2021-08-10 22:49:49.119114 I | embed: ready to serve client requests
	2021-08-10 22:49:49.122856 I | embed: ready to serve client requests
	2021-08-10 22:49:49.123370 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-10 22:49:49.130014 I | embed: serving client requests on 192.168.50.38:2379
	2021-08-10 22:50:14.254893 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-6955765f44-9gbh9\" " with result "range_response_count:1 size:1705" took too long (397.068482ms) to execute
	
	* 
	* ==> kernel <==
	*  22:51:10 up 2 min,  0 users,  load average: 1.88, 0.85, 0.33
	Linux test-preload-20210810224820-30291 4.19.182 #1 SMP Fri Aug 6 09:11:32 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [04efcb33acc3b6322ec09f6ab03cc72fb4eef3851be315661768b2c28b7580ca] <==
	* W0810 22:50:52.172914       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0810 22:50:52.173032       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0810 22:50:52.173060       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0810 22:50:52.173089       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0810 22:50:52.173117       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0810 22:50:52.173233       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0810 22:50:52.173260       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0810 22:50:52.173374       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0810 22:50:52.173405       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0810 22:50:52.173442       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0810 22:50:52.173560       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0810 22:50:52.173598       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0810 22:50:52.173714       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0810 22:50:52.173835       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0810 22:50:52.173868       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0810 22:50:52.174324       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0810 22:50:52.174466       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0810 22:50:52.174493       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0810 22:50:52.174518       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0810 22:50:52.174725       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0810 22:50:52.174975       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0810 22:50:52.175018       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0810 22:50:52.175062       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0810 22:50:52.175123       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0810 22:50:52.175419       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	* 
	* ==> kube-apiserver [225594d64682e26b96c0cce3458f20b8fada934b3d0a3c56e60f9bfd52f3dd75] <==
	* I0810 22:51:00.977812       1 shared_informer.go:197] Waiting for caches to sync for crd-autoregister
	I0810 22:51:00.988408       1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I0810 22:51:00.988478       1 dynamic_cafile_content.go:166] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	I0810 22:51:00.988900       1 controller.go:85] Starting OpenAPI controller
	I0810 22:51:00.988963       1 customresource_discovery_controller.go:208] Starting DiscoveryController
	I0810 22:51:00.989000       1 naming_controller.go:288] Starting NamingConditionController
	I0810 22:51:00.989045       1 establishing_controller.go:73] Starting EstablishingController
	I0810 22:51:00.989083       1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
	I0810 22:51:00.989126       1 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
	E0810 22:51:01.170955       1 controller.go:151] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0810 22:51:01.174954       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0810 22:51:01.178402       1 cache.go:39] Caches are synced for autoregister controller
	I0810 22:51:01.178855       1 shared_informer.go:204] Caches are synced for cluster_authentication_trust_controller 
	I0810 22:51:01.179870       1 shared_informer.go:204] Caches are synced for crd-autoregister 
	I0810 22:51:01.180039       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0810 22:51:01.209478       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0810 22:51:01.973884       1 controller.go:107] OpenAPI AggregationController: Processing item 
	I0810 22:51:01.974046       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0810 22:51:01.974091       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0810 22:51:01.988907       1 storage_scheduling.go:142] all system priority classes are created successfully or already exist.
	I0810 22:51:02.783412       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0810 22:51:02.926324       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0810 22:51:03.049514       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0810 22:51:03.082591       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0810 22:51:03.094180       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-controller-manager [4e805744318b0952975f4eabe512214832e121a2caf56cdfb9a0fff7697b3131] <==
	* I0810 22:51:03.972131       1 replica_set.go:180] Starting replicationcontroller controller
	I0810 22:51:03.972139       1 shared_informer.go:197] Waiting for caches to sync for ReplicationController
	I0810 22:51:04.122194       1 controllermanager.go:533] Started "daemonset"
	I0810 22:51:04.122332       1 daemon_controller.go:255] Starting daemon sets controller
	I0810 22:51:04.122340       1 shared_informer.go:197] Waiting for caches to sync for daemon sets
	I0810 22:51:04.722859       1 controllermanager.go:533] Started "horizontalpodautoscaling"
	I0810 22:51:04.722956       1 horizontal.go:156] Starting HPA controller
	I0810 22:51:04.722965       1 shared_informer.go:197] Waiting for caches to sync for HPA
	I0810 22:51:04.873255       1 controllermanager.go:533] Started "persistentvolume-binder"
	I0810 22:51:04.873298       1 pv_controller_base.go:294] Starting persistent volume controller
	I0810 22:51:04.873514       1 shared_informer.go:197] Waiting for caches to sync for persistent volume
	I0810 22:51:05.022488       1 controllermanager.go:533] Started "attachdetach"
	I0810 22:51:05.022836       1 attach_detach_controller.go:342] Starting attach detach controller
	I0810 22:51:05.023187       1 shared_informer.go:197] Waiting for caches to sync for attach detach
	I0810 22:51:05.173359       1 controllermanager.go:533] Started "pv-protection"
	I0810 22:51:05.173436       1 pv_protection_controller.go:81] Starting PV protection controller
	I0810 22:51:05.173533       1 shared_informer.go:197] Waiting for caches to sync for PV protection
	W0810 22:51:05.173444       1 controllermanager.go:525] Skipping "ttl-after-finished"
	I0810 22:51:05.322699       1 controllermanager.go:533] Started "serviceaccount"
	I0810 22:51:05.322743       1 serviceaccounts_controller.go:116] Starting service account controller
	I0810 22:51:05.323116       1 shared_informer.go:197] Waiting for caches to sync for service account
	I0810 22:51:05.472212       1 controllermanager.go:533] Started "ttl"
	I0810 22:51:05.472316       1 ttl_controller.go:116] Starting TTL controller
	I0810 22:51:05.472858       1 shared_informer.go:197] Waiting for caches to sync for TTL
	I0810 22:51:05.621271       1 node_ipam_controller.go:94] Sending events to api server.
	
	* 
	* ==> kube-controller-manager [9029fc5bb880e165bc23810305ee69777ad51357946b544b0960039cfc7e8870] <==
	* I0810 22:50:10.944598       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-6955765f44", UID:"c3c2d2df-a9b3-44ca-bc28-e9953207edda", APIVersion:"apps/v1", ResourceVersion:"305", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-6955765f44-9gbh9
	I0810 22:50:10.954017       1 shared_informer.go:204] Caches are synced for certificate-csrapproving 
	I0810 22:50:10.983754       1 shared_informer.go:204] Caches are synced for certificate-csrsigning 
	I0810 22:50:11.151471       1 shared_informer.go:204] Caches are synced for endpoint 
	I0810 22:50:11.235695       1 shared_informer.go:204] Caches are synced for persistent volume 
	I0810 22:50:11.235907       1 shared_informer.go:204] Caches are synced for taint 
	I0810 22:50:11.235967       1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone: 
	W0810 22:50:11.236133       1 node_lifecycle_controller.go:1058] Missing timestamp for Node test-preload-20210810224820-30291. Assuming now as a timestamp.
	I0810 22:50:11.236178       1 node_lifecycle_controller.go:1259] Controller detected that zone  is now in state Normal.
	I0810 22:50:11.236526       1 taint_manager.go:186] Starting NoExecuteTaintManager
	I0810 22:50:11.236720       1 event.go:281] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"test-preload-20210810224820-30291", UID:"e9e11cae-d70f-47ac-a112-43f218912eb1", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node test-preload-20210810224820-30291 event: Registered Node test-preload-20210810224820-30291 in Controller
	I0810 22:50:11.271818       1 shared_informer.go:204] Caches are synced for daemon sets 
	I0810 22:50:11.285976       1 shared_informer.go:204] Caches are synced for expand 
	I0810 22:50:11.290530       1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"0bf7c1e1-5faa-4ccd-a209-3af173a67627", APIVersion:"apps/v1", ResourceVersion:"188", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-zvm9g
	I0810 22:50:11.296522       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"7bb3d8bf-5e5b-4ade-a73d-c4cf46f4a982", APIVersion:"apps/v1", ResourceVersion:"327", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-6955765f44 to 1
	I0810 22:50:11.335605       1 shared_informer.go:204] Caches are synced for resource quota 
	I0810 22:50:11.340057       1 shared_informer.go:204] Caches are synced for garbage collector 
	I0810 22:50:11.340154       1 garbagecollector.go:138] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0810 22:50:11.345962       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-6955765f44", UID:"c3c2d2df-a9b3-44ca-bc28-e9953207edda", APIVersion:"apps/v1", ResourceVersion:"329", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-6955765f44-rp6jt
	E0810 22:50:11.349545       1 daemon_controller.go:290] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"0bf7c1e1-5faa-4ccd-a209-3af173a67627", ResourceVersion:"188", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764232596, loc:(*time.Location)(0x6b951c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001433e80), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Names
pace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeS
ource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc000a678c0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001433ea0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolu
meSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIV
olumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001433ec0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.A
zureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.17.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001433f00)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMo
de)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc000a78910), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001064d08), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"beta.kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServic
eAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0013604e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy
{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0000f05c8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001064d58)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	E0810 22:50:11.379229       1 daemon_controller.go:290] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"0bf7c1e1-5faa-4ccd-a209-3af173a67627", ResourceVersion:"330", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764232596, loc:(*time.Location)(0x6b951c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc000e50800), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Names
pace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeS
ource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0012e8240), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000e50820), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolu
meSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIV
olumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000e50840), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.A
zureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.17.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc000e50880)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMo
de)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0003f2730), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0009f9048), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"beta.kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServic
eAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00115c240), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy
{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00000fa80)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0009f9088)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0810 22:50:11.382824       1 shared_informer.go:204] Caches are synced for resource quota 
	I0810 22:50:11.384786       1 shared_informer.go:204] Caches are synced for garbage collector 
	
	* 
	* ==> kube-proxy [4e1d34a786ab389aa821b4f9c92713d5556d250d23142b50dfc40e1fbc8965c4] <==
	* W0810 22:50:17.889452       1 server_others.go:323] Unknown proxy mode "", assuming iptables proxy
	I0810 22:50:17.904290       1 node.go:135] Successfully retrieved node IP: 192.168.50.38
	I0810 22:50:17.904389       1 server_others.go:145] Using iptables Proxier.
	I0810 22:50:17.905240       1 server.go:571] Version: v1.17.0
	I0810 22:50:17.914109       1 config.go:313] Starting service config controller
	I0810 22:50:17.914218       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0810 22:50:17.914250       1 config.go:131] Starting endpoints config controller
	I0810 22:50:17.914260       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0810 22:50:18.015274       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0810 22:50:18.015285       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-proxy [71fb588b5749d72c0be2a19a31e081cdd39c03185a093084126222ea78b43f8e] <==
	* W0810 22:51:03.942658       1 server_others.go:323] Unknown proxy mode "", assuming iptables proxy
	I0810 22:51:03.963298       1 node.go:135] Successfully retrieved node IP: 192.168.50.38
	I0810 22:51:03.963327       1 server_others.go:145] Using iptables Proxier.
	I0810 22:51:03.964781       1 server.go:571] Version: v1.17.0
	I0810 22:51:03.967917       1 config.go:313] Starting service config controller
	I0810 22:51:03.968019       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0810 22:51:03.968124       1 config.go:131] Starting endpoints config controller
	I0810 22:51:03.968135       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0810 22:51:04.068786       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0810 22:51:04.068947       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [1c99a52e5c8b3eb6ccf886219ddd66b283ac9b42d473602412b6db9e7ddb40ce] <==
	* I0810 22:50:57.557262       1 serving.go:312] Generated self-signed cert in-memory
	W0810 22:50:57.909999       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found
	W0810 22:50:57.910222       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found
	W0810 22:51:01.005847       1 authentication.go:348] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0810 22:51:01.005926       1 authentication.go:296] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0810 22:51:01.006013       1 authentication.go:297] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0810 22:51:01.006063       1 authentication.go:298] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	W0810 22:51:01.125250       1 authorization.go:47] Authorization is disabled
	W0810 22:51:01.125524       1 authentication.go:92] Authentication is disabled
	I0810 22:51:01.125731       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0810 22:51:01.127115       1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0810 22:51:01.127220       1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0810 22:51:01.144915       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0810 22:51:01.144966       1 tlsconfig.go:219] Starting DynamicServingCertificateController
	I0810 22:51:01.227689       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kube-scheduler [c53eed59215f6778e54e6b1db239641870d0f6cbb1005d78553d81db5c7c652d] <==
	* E0810 22:49:53.524391       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0810 22:49:53.525788       1 reflector.go:156] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0810 22:49:53.526121       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0810 22:49:53.527927       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0810 22:49:53.530510       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0810 22:49:53.531774       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0810 22:49:53.533290       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0810 22:49:53.534815       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0810 22:49:53.535943       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0810 22:49:53.536807       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0810 22:49:53.539193       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0810 22:49:54.611817       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0810 22:50:10.971847       1 factory.go:494] pod is already present in the activeQ
	E0810 22:50:52.560927       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodDisruptionBudget: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=1&timeout=9m10s&timeoutSeconds=550&watch=true: dial tcp 192.168.50.38:8443: connect: connection refused
	E0810 22:50:52.561063       1 reflector.go:320] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to watch *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%3DFailed%!C(MISSING)status.phase%3DSucceeded&resourceVersion=397&timeoutSeconds=426&watch=true: dial tcp 192.168.50.38:8443: connect: connection refused
	E0810 22:50:52.561133       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicaSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=388&timeout=5m5s&timeoutSeconds=305&watch=true: dial tcp 192.168.50.38:8443: connect: connection refused
	E0810 22:50:52.561176       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicationController: Get https://control-plane.minikube.internal:8443/api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=1&timeout=5m26s&timeoutSeconds=326&watch=true: dial tcp 192.168.50.38:8443: connect: connection refused
	E0810 22:50:52.561218       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StorageClass: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=352&timeout=7m24s&timeoutSeconds=444&watch=true: dial tcp 192.168.50.38:8443: connect: connection refused
	E0810 22:50:52.561256       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSINode: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csinodes?allowWatchBookmarks=true&resourceVersion=35&timeout=6m0s&timeoutSeconds=360&watch=true: dial tcp 192.168.50.38:8443: connect: connection refused
	E0810 22:50:52.561301       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=325&timeout=9m4s&timeoutSeconds=544&watch=true: dial tcp 192.168.50.38:8443: connect: connection refused
	E0810 22:50:52.561349       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StatefulSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=1&timeout=7m42s&timeoutSeconds=462&watch=true: dial tcp 192.168.50.38:8443: connect: connection refused
	E0810 22:50:52.561397       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=1&timeout=9m41s&timeoutSeconds=581&watch=true: dial tcp 192.168.50.38:8443: connect: connection refused
	E0810 22:50:52.561434       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?allowWatchBookmarks=true&resourceVersion=183&timeout=7m44s&timeoutSeconds=464&watch=true: dial tcp 192.168.50.38:8443: connect: connection refused
	E0810 22:50:52.561572       1 reflector.go:320] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to watch *v1.ConfigMap: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&resourceVersion=29&timeout=7m4s&timeoutSeconds=424&watch=true: dial tcp 192.168.50.38:8443: connect: connection refused
	E0810 22:50:52.561597       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=1&timeout=8m4s&timeoutSeconds=484&watch=true: dial tcp 192.168.50.38:8443: connect: connection refused
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2021-08-10 22:48:34 UTC, end at Tue 2021-08-10 22:51:10 UTC. --
	Aug 10 22:51:01 test-preload-20210810224820-30291 kubelet[9150]: I0810 22:51:01.138764    9150 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8abdfb71-f1fb-4e43-a76f-55549541d3f5-config-volume") pod "coredns-6955765f44-9gbh9" (UID: "8abdfb71-f1fb-4e43-a76f-55549541d3f5")
	Aug 10 22:51:01 test-preload-20210810224820-30291 kubelet[9150]: I0810 22:51:01.138786    9150 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-2qw2p" (UniqueName: "kubernetes.io/secret/26937be5-e3e9-4284-a360-e84c2f175b4f-kube-proxy-token-2qw2p") pod "kube-proxy-zvm9g" (UID: "26937be5-e3e9-4284-a360-e84c2f175b4f")
	Aug 10 22:51:01 test-preload-20210810224820-30291 kubelet[9150]: I0810 22:51:01.138807    9150 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/26937be5-e3e9-4284-a360-e84c2f175b4f-lib-modules") pod "kube-proxy-zvm9g" (UID: "26937be5-e3e9-4284-a360-e84c2f175b4f")
	Aug 10 22:51:01 test-preload-20210810224820-30291 kubelet[9150]: I0810 22:51:01.138827    9150 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/9540d6e2-c97c-415d-b6be-20e310fbd70f-tmp") pod "storage-provisioner" (UID: "9540d6e2-c97c-415d-b6be-20e310fbd70f")
	Aug 10 22:51:01 test-preload-20210810224820-30291 kubelet[9150]: I0810 22:51:01.138848    9150 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-nkj29" (UniqueName: "kubernetes.io/secret/9540d6e2-c97c-415d-b6be-20e310fbd70f-storage-provisioner-token-nkj29") pod "storage-provisioner" (UID: "9540d6e2-c97c-415d-b6be-20e310fbd70f")
	Aug 10 22:51:01 test-preload-20210810224820-30291 kubelet[9150]: I0810 22:51:01.138877    9150 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/26937be5-e3e9-4284-a360-e84c2f175b4f-kube-proxy") pod "kube-proxy-zvm9g" (UID: "26937be5-e3e9-4284-a360-e84c2f175b4f")
	Aug 10 22:51:01 test-preload-20210810224820-30291 kubelet[9150]: I0810 22:51:01.138893    9150 reconciler.go:156] Reconciler: start to sync state
	Aug 10 22:51:01 test-preload-20210810224820-30291 kubelet[9150]: W0810 22:51:01.177930    9150 kubelet.go:1649] Deleted mirror pod "kube-apiserver-test-preload-20210810224820-30291_kube-system(e920ed40-b534-4464-8eed-4362e068df64)" because it is outdated
	Aug 10 22:51:01 test-preload-20210810224820-30291 kubelet[9150]: W0810 22:51:01.197586    9150 kubelet.go:1649] Deleted mirror pod "kube-scheduler-test-preload-20210810224820-30291_kube-system(77dfdcb8-d3b4-410e-9ca8-a4325d05e663)" because it is outdated
	Aug 10 22:51:01 test-preload-20210810224820-30291 kubelet[9150]: W0810 22:51:01.203361    9150 kubelet.go:1649] Deleted mirror pod "kube-controller-manager-test-preload-20210810224820-30291_kube-system(713d7e54-ad3f-4b30-a47e-daba1e31f65f)" because it is outdated
	Aug 10 22:51:01 test-preload-20210810224820-30291 kubelet[9150]: I0810 22:51:01.818126    9150 kubelet_node_status.go:112] Node test-preload-20210810224820-30291 was previously registered
	Aug 10 22:51:01 test-preload-20210810224820-30291 kubelet[9150]: I0810 22:51:01.818325    9150 kubelet_node_status.go:73] Successfully registered node test-preload-20210810224820-30291
	Aug 10 22:51:02 test-preload-20210810224820-30291 kubelet[9150]: E0810 22:51:02.246864    9150 configmap.go:200] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Aug 10 22:51:02 test-preload-20210810224820-30291 kubelet[9150]: E0810 22:51:02.246886    9150 configmap.go:200] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Aug 10 22:51:02 test-preload-20210810224820-30291 kubelet[9150]: E0810 22:51:02.246903    9150 secret.go:195] Couldn't get secret kube-system/kube-proxy-token-2qw2p: failed to sync secret cache: timed out waiting for the condition
	Aug 10 22:51:02 test-preload-20210810224820-30291 kubelet[9150]: E0810 22:51:02.246915    9150 secret.go:195] Couldn't get secret kube-system/storage-provisioner-token-nkj29: failed to sync secret cache: timed out waiting for the condition
	Aug 10 22:51:02 test-preload-20210810224820-30291 kubelet[9150]: E0810 22:51:02.246927    9150 secret.go:195] Couldn't get secret kube-system/coredns-token-s2ntq: failed to sync secret cache: timed out waiting for the condition
	Aug 10 22:51:02 test-preload-20210810224820-30291 kubelet[9150]: E0810 22:51:02.247262    9150 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/configmap/8abdfb71-f1fb-4e43-a76f-55549541d3f5-config-volume\" (\"8abdfb71-f1fb-4e43-a76f-55549541d3f5\")" failed. No retries permitted until 2021-08-10 22:51:02.747236006 +0000 UTC m=+8.593413153 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8abdfb71-f1fb-4e43-a76f-55549541d3f5-config-volume\") pod \"coredns-6955765f44-9gbh9\" (UID: \"8abdfb71-f1fb-4e43-a76f-55549541d3f5\") : failed to sync configmap cache: timed out waiting for the condition"
	Aug 10 22:51:02 test-preload-20210810224820-30291 kubelet[9150]: E0810 22:51:02.247479    9150 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/configmap/26937be5-e3e9-4284-a360-e84c2f175b4f-kube-proxy\" (\"26937be5-e3e9-4284-a360-e84c2f175b4f\")" failed. No retries permitted until 2021-08-10 22:51:02.74745494 +0000 UTC m=+8.593632065 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/26937be5-e3e9-4284-a360-e84c2f175b4f-kube-proxy\") pod \"kube-proxy-zvm9g\" (UID: \"26937be5-e3e9-4284-a360-e84c2f175b4f\") : failed to sync configmap cache: timed out waiting for the condition"
	Aug 10 22:51:02 test-preload-20210810224820-30291 kubelet[9150]: E0810 22:51:02.247503    9150 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/secret/26937be5-e3e9-4284-a360-e84c2f175b4f-kube-proxy-token-2qw2p\" (\"26937be5-e3e9-4284-a360-e84c2f175b4f\")" failed. No retries permitted until 2021-08-10 22:51:02.747485902 +0000 UTC m=+8.593663155 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"kube-proxy-token-2qw2p\" (UniqueName: \"kubernetes.io/secret/26937be5-e3e9-4284-a360-e84c2f175b4f-kube-proxy-token-2qw2p\") pod \"kube-proxy-zvm9g\" (UID: \"26937be5-e3e9-4284-a360-e84c2f175b4f\") : failed to sync secret cache: timed out waiting for the condition"
	Aug 10 22:51:02 test-preload-20210810224820-30291 kubelet[9150]: E0810 22:51:02.247520    9150 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/secret/9540d6e2-c97c-415d-b6be-20e310fbd70f-storage-provisioner-token-nkj29\" (\"9540d6e2-c97c-415d-b6be-20e310fbd70f\")" failed. No retries permitted until 2021-08-10 22:51:02.747507977 +0000 UTC m=+8.593685091 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"storage-provisioner-token-nkj29\" (UniqueName: \"kubernetes.io/secret/9540d6e2-c97c-415d-b6be-20e310fbd70f-storage-provisioner-token-nkj29\") pod \"storage-provisioner\" (UID: \"9540d6e2-c97c-415d-b6be-20e310fbd70f\") : failed to sync secret cache: timed out waiting for the condition"
	Aug 10 22:51:02 test-preload-20210810224820-30291 kubelet[9150]: E0810 22:51:02.247537    9150 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/secret/8abdfb71-f1fb-4e43-a76f-55549541d3f5-coredns-token-s2ntq\" (\"8abdfb71-f1fb-4e43-a76f-55549541d3f5\")" failed. No retries permitted until 2021-08-10 22:51:02.747525929 +0000 UTC m=+8.593703045 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"coredns-token-s2ntq\" (UniqueName: \"kubernetes.io/secret/8abdfb71-f1fb-4e43-a76f-55549541d3f5-coredns-token-s2ntq\") pod \"coredns-6955765f44-9gbh9\" (UID: \"8abdfb71-f1fb-4e43-a76f-55549541d3f5\") : failed to sync secret cache: timed out waiting for the condition"
	Aug 10 22:51:03 test-preload-20210810224820-30291 kubelet[9150]: W0810 22:51:03.682420    9150 pod_container_deletor.go:75] Container "8129021bbf4ea8e5b5171ea9104ac69b5e520e6eb2031086365e913888c40529" not found in pod's containers
	Aug 10 22:51:03 test-preload-20210810224820-30291 kubelet[9150]: W0810 22:51:03.700720    9150 pod_container_deletor.go:75] Container "f379dea9d5ffcac7e24e23f2e0dab8b1fc4800f05db9951983bce94277e3379a" not found in pod's containers
	Aug 10 22:51:03 test-preload-20210810224820-30291 kubelet[9150]: W0810 22:51:03.709816    9150 pod_container_deletor.go:75] Container "b231d093e88f22e3b7214d52cd57d587762a584f1cc852d9d3e226930f794bab" not found in pod's containers
	
	* 
	* ==> storage-provisioner [5460150a2fee8086d52a326ac54ea685b608f48471759d6268917322e4aa38d3] <==
	* I0810 22:51:03.601413       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0810 22:51:03.682522       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0810 22:51:03.683177       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	* 
	* ==> storage-provisioner [b9f131c8976914b86f7ef77a7914c676182f1e4fa608de3ce864138068cb90c3] <==
	* I0810 22:50:30.684424       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0810 22:50:30.705939       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0810 22:50:30.706804       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0810 22:50:30.726991       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0810 22:50:30.727888       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_test-preload-20210810224820-30291_e21da6ec-f3e0-428a-b051-102c096d85a3!
	I0810 22:50:30.732498       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c5e95e82-e02c-4e1d-9409-5fa968e3bf86", APIVersion:"v1", ResourceVersion:"394", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' test-preload-20210810224820-30291_e21da6ec-f3e0-428a-b051-102c096d85a3 became leader
	I0810 22:50:30.829073       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_test-preload-20210810224820-30291_e21da6ec-f3e0-428a-b051-102c096d85a3!
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-20210810224820-30291 -n test-preload-20210810224820-30291
helpers_test.go:262: (dbg) Run:  kubectl --context test-preload-20210810224820-30291 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:268: non-running pods: 
helpers_test.go:270: ======> post-mortem[TestPreload]: describe non-running pods <======
helpers_test.go:273: (dbg) Run:  kubectl --context test-preload-20210810224820-30291 describe pod 
helpers_test.go:273: (dbg) Non-zero exit: kubectl --context test-preload-20210810224820-30291 describe pod : exit status 1 (50.292734ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:275: kubectl --context test-preload-20210810224820-30291 describe pod : exit status 1
helpers_test.go:176: Cleaning up "test-preload-20210810224820-30291" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-20210810224820-30291
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-20210810224820-30291: (1.137503759s)
--- FAIL: TestPreload (172.03s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (111.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p calico-20210810225506-30291 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=kvm2  --container-runtime=crio
E0810 22:59:16.115850   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:98: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p calico-20210810225506-30291 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=kvm2  --container-runtime=crio: exit status 80 (1m51.840767037s)

                                                
                                                
-- stdout --
	* [calico-20210810225506-30291] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	  - MINIKUBE_LOCATION=12230
	* Using the kvm2 driver based on user configuration
	* Starting control plane node calico-20210810225506-30291 in cluster calico-20210810225506-30291
	* Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.21.3 on CRI-O 1.20.2 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Calico (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Calico (Container Networking Interface) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0810 22:59:13.054486    5770 out.go:298] Setting OutFile to fd 1 ...
	I0810 22:59:13.054559    5770 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:59:13.054564    5770 out.go:311] Setting ErrFile to fd 2...
	I0810 22:59:13.054568    5770 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:59:13.054737    5770 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin
	I0810 22:59:13.055110    5770 out.go:305] Setting JSON to false
	I0810 22:59:13.105054    5770 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-3","uptime":9713,"bootTime":1628626640,"procs":193,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0810 22:59:13.105242    5770 start.go:121] virtualization: kvm guest
	I0810 22:59:13.107352    5770 out.go:177] * [calico-20210810225506-30291] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0810 22:59:13.108900    5770 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0810 22:59:13.107533    5770 notify.go:169] Checking for updates...
	I0810 22:59:13.110797    5770 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0810 22:59:13.112520    5770 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	I0810 22:59:13.114090    5770 out.go:177]   - MINIKUBE_LOCATION=12230
	I0810 22:59:13.114983    5770 driver.go:335] Setting default libvirt URI to qemu:///system
	I0810 22:59:13.157592    5770 out.go:177] * Using the kvm2 driver based on user configuration
	I0810 22:59:13.157625    5770 start.go:278] selected driver: kvm2
	I0810 22:59:13.157632    5770 start.go:751] validating driver "kvm2" against <nil>
	I0810 22:59:13.157656    5770 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0810 22:59:13.159181    5770 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:59:13.159436    5770 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0810 22:59:13.175855    5770 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0810 22:59:13.175950    5770 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0810 22:59:13.176219    5770 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0810 22:59:13.176263    5770 cni.go:93] Creating CNI manager for "calico"
	I0810 22:59:13.176274    5770 start_flags.go:272] Found "Calico" CNI - setting NetworkPlugin=cni
	I0810 22:59:13.176289    5770 start_flags.go:277] config:
	{Name:calico-20210810225506-30291 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:calico-20210810225506-30291 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0810 22:59:13.176416    5770 iso.go:123] acquiring lock: {Name:mke8829815ca14456120fefc524d0a056bf82da0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:59:13.178352    5770 out.go:177] * Starting control plane node calico-20210810225506-30291 in cluster calico-20210810225506-30291
	I0810 22:59:13.178375    5770 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0810 22:59:13.178416    5770 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0810 22:59:13.178433    5770 cache.go:56] Caching tarball of preloaded images
	I0810 22:59:13.178551    5770 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0810 22:59:13.178568    5770 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on crio
	I0810 22:59:13.178691    5770 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225506-30291/config.json ...
	I0810 22:59:13.178717    5770 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225506-30291/config.json: {Name:mk154ac3d2c3dd3aa10faf4edd4636cb2a5e8717 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:59:13.178860    5770 cache.go:205] Successfully downloaded all kic artifacts
	I0810 22:59:13.178891    5770 start.go:313] acquiring machines lock for calico-20210810225506-30291: {Name:mk9647f7c84b24381af0d3e731fd883065efc3b8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0810 22:59:13.178928    5770 start.go:317] acquired machines lock for "calico-20210810225506-30291" in 24.89µs
	I0810 22:59:13.178945    5770 start.go:89] Provisioning new machine with config: &{Name:calico-20210810225506-30291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12122/minikube-v1.22.0-1628238775-12122.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:calico-20210810225506-30291 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0810 22:59:13.179009    5770 start.go:126] createHost starting for "" (driver="kvm2")
	I0810 22:59:13.180896    5770 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0810 22:59:13.181045    5770 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:59:13.181137    5770 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:59:13.194224    5770 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43411
	I0810 22:59:13.194755    5770 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:59:13.195354    5770 main.go:130] libmachine: Using API Version  1
	I0810 22:59:13.195374    5770 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:59:13.195715    5770 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:59:13.195846    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetMachineName
	I0810 22:59:13.195980    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .DriverName
	I0810 22:59:13.196153    5770 start.go:160] libmachine.API.Create for "calico-20210810225506-30291" (driver="kvm2")
	I0810 22:59:13.196189    5770 client.go:168] LocalClient.Create starting
	I0810 22:59:13.196227    5770 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem
	I0810 22:59:13.196255    5770 main.go:130] libmachine: Decoding PEM data...
	I0810 22:59:13.196277    5770 main.go:130] libmachine: Parsing certificate...
	I0810 22:59:13.196409    5770 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem
	I0810 22:59:13.196434    5770 main.go:130] libmachine: Decoding PEM data...
	I0810 22:59:13.196454    5770 main.go:130] libmachine: Parsing certificate...
	I0810 22:59:13.196512    5770 main.go:130] libmachine: Running pre-create checks...
	I0810 22:59:13.196527    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .PreCreateCheck
	I0810 22:59:13.196875    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetConfigRaw
	I0810 22:59:13.197270    5770 main.go:130] libmachine: Creating machine...
	I0810 22:59:13.197286    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .Create
	I0810 22:59:13.197412    5770 main.go:130] libmachine: (calico-20210810225506-30291) Creating KVM machine...
	I0810 22:59:13.200381    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | found existing default KVM network
	I0810 22:59:13.202128    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | I0810 22:59:13.201949    5794 network.go:240] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b4:06:d9}}
	I0810 22:59:13.203948    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | I0810 22:59:13.203841    5794 network.go:240] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:70:7c:a1}}
	I0810 22:59:13.205434    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | I0810 22:59:13.205330    5794 network.go:240] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:ee:53:1e}}
	I0810 22:59:13.206388    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | I0810 22:59:13.206291    5794 network.go:240] skipping subnet 192.168.72.0/24 that is taken: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 Interface:{IfaceName:virbr4 IfaceIPv4:192.168.72.1 IfaceMTU:1500 IfaceMAC:52:54:00:45:d3:67}}
	I0810 22:59:13.208727    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | I0810 22:59:13.208626    5794 network.go:240] skipping subnet 192.168.83.0/24 that is taken: &{IP:192.168.83.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.83.0/24 Gateway:192.168.83.1 ClientMin:192.168.83.2 ClientMax:192.168.83.254 Broadcast:192.168.83.255 Interface:{IfaceName:virbr5 IfaceIPv4:192.168.83.1 IfaceMTU:1500 IfaceMAC:52:54:00:ea:76:4e}}
	I0810 22:59:13.210367    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | I0810 22:59:13.210264    5794 network.go:288] reserving subnet 192.168.94.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.94.0:0xc000190028] misses:0}
	I0810 22:59:13.210401    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | I0810 22:59:13.210305    5794 network.go:235] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0810 22:59:13.238343    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | trying to create private KVM network mk-calico-20210810225506-30291 192.168.94.0/24...
	I0810 22:59:13.548797    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | private KVM network mk-calico-20210810225506-30291 192.168.94.0/24 created
	I0810 22:59:13.548854    5770 main.go:130] libmachine: (calico-20210810225506-30291) Setting up store path in /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/calico-20210810225506-30291 ...
	I0810 22:59:13.548881    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | I0810 22:59:13.548783    5794 common.go:108] Making disk image using store path: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	I0810 22:59:13.548924    5770 main.go:130] libmachine: (calico-20210810225506-30291) Building disk image from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/iso/minikube-v1.22.0-1628238775-12122.iso
	I0810 22:59:13.549067    5770 main.go:130] libmachine: (calico-20210810225506-30291) Downloading /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/iso/minikube-v1.22.0-1628238775-12122.iso...
	I0810 22:59:13.766951    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | I0810 22:59:13.766820    5794 common.go:115] Creating ssh key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/calico-20210810225506-30291/id_rsa...
	I0810 22:59:13.927229    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | I0810 22:59:13.927095    5794 common.go:121] Creating raw disk image: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/calico-20210810225506-30291/calico-20210810225506-30291.rawdisk...
	I0810 22:59:13.927263    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | Writing magic tar header
	I0810 22:59:13.927282    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | Writing SSH key tar header
	I0810 22:59:13.927335    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | I0810 22:59:13.927294    5794 common.go:135] Fixing permissions on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/calico-20210810225506-30291 ...
	I0810 22:59:13.927454    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/calico-20210810225506-30291
	I0810 22:59:13.927519    5770 main.go:130] libmachine: (calico-20210810225506-30291) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/calico-20210810225506-30291 (perms=drwx------)
	I0810 22:59:13.927546    5770 main.go:130] libmachine: (calico-20210810225506-30291) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines (perms=drwxr-xr-x)
	I0810 22:59:13.927563    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines
	I0810 22:59:13.927601    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	I0810 22:59:13.927632    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0
	I0810 22:59:13.927657    5770 main.go:130] libmachine: (calico-20210810225506-30291) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube (perms=drwxr-xr-x)
	I0810 22:59:13.927681    5770 main.go:130] libmachine: (calico-20210810225506-30291) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0 (perms=drwxr-xr-x)
	I0810 22:59:13.927694    5770 main.go:130] libmachine: (calico-20210810225506-30291) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxr-xr-x)
	I0810 22:59:13.927707    5770 main.go:130] libmachine: (calico-20210810225506-30291) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0810 22:59:13.927719    5770 main.go:130] libmachine: (calico-20210810225506-30291) Creating domain...
	I0810 22:59:13.927756    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0810 22:59:13.927778    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | Checking permissions on dir: /home/jenkins
	I0810 22:59:13.927791    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | Checking permissions on dir: /home
	I0810 22:59:13.927801    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | Skipping /home - not owner
	I0810 22:59:13.956502    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | domain calico-20210810225506-30291 has defined MAC address 52:54:00:e1:1f:89 in network default
	I0810 22:59:13.957145    5770 main.go:130] libmachine: (calico-20210810225506-30291) Ensuring networks are active...
	I0810 22:59:13.957172    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | domain calico-20210810225506-30291 has defined MAC address 52:54:00:5a:2e:fb in network mk-calico-20210810225506-30291
	I0810 22:59:13.960674    5770 main.go:130] libmachine: (calico-20210810225506-30291) Ensuring network default is active
	I0810 22:59:13.961059    5770 main.go:130] libmachine: (calico-20210810225506-30291) Ensuring network mk-calico-20210810225506-30291 is active
	I0810 22:59:13.961738    5770 main.go:130] libmachine: (calico-20210810225506-30291) Getting domain xml...
	I0810 22:59:13.963854    5770 main.go:130] libmachine: (calico-20210810225506-30291) Creating domain...
	I0810 22:59:14.477020    5770 main.go:130] libmachine: (calico-20210810225506-30291) Waiting to get IP...
	I0810 22:59:14.478025    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | domain calico-20210810225506-30291 has defined MAC address 52:54:00:5a:2e:fb in network mk-calico-20210810225506-30291
	I0810 22:59:14.478566    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | unable to find current IP address of domain calico-20210810225506-30291 in network mk-calico-20210810225506-30291
	I0810 22:59:14.478601    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | I0810 22:59:14.478496    5794 retry.go:31] will retry after 263.082536ms: waiting for machine to come up
	I0810 22:59:14.742860    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | domain calico-20210810225506-30291 has defined MAC address 52:54:00:5a:2e:fb in network mk-calico-20210810225506-30291
	I0810 22:59:14.743422    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | unable to find current IP address of domain calico-20210810225506-30291 in network mk-calico-20210810225506-30291
	I0810 22:59:14.743455    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | I0810 22:59:14.743364    5794 retry.go:31] will retry after 381.329545ms: waiting for machine to come up
	I0810 22:59:15.126062    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | domain calico-20210810225506-30291 has defined MAC address 52:54:00:5a:2e:fb in network mk-calico-20210810225506-30291
	I0810 22:59:15.126555    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | unable to find current IP address of domain calico-20210810225506-30291 in network mk-calico-20210810225506-30291
	I0810 22:59:15.126592    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | I0810 22:59:15.126493    5794 retry.go:31] will retry after 422.765636ms: waiting for machine to come up
	I0810 22:59:15.550578    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | domain calico-20210810225506-30291 has defined MAC address 52:54:00:5a:2e:fb in network mk-calico-20210810225506-30291
	I0810 22:59:15.551169    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | unable to find current IP address of domain calico-20210810225506-30291 in network mk-calico-20210810225506-30291
	I0810 22:59:15.551199    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | I0810 22:59:15.551118    5794 retry.go:31] will retry after 473.074753ms: waiting for machine to come up
	I0810 22:59:16.025599    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | domain calico-20210810225506-30291 has defined MAC address 52:54:00:5a:2e:fb in network mk-calico-20210810225506-30291
	I0810 22:59:16.026097    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | unable to find current IP address of domain calico-20210810225506-30291 in network mk-calico-20210810225506-30291
	I0810 22:59:16.026126    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | I0810 22:59:16.026040    5794 retry.go:31] will retry after 587.352751ms: waiting for machine to come up
	I0810 22:59:16.614688    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | domain calico-20210810225506-30291 has defined MAC address 52:54:00:5a:2e:fb in network mk-calico-20210810225506-30291
	I0810 22:59:16.615327    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | unable to find current IP address of domain calico-20210810225506-30291 in network mk-calico-20210810225506-30291
	I0810 22:59:16.615357    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | I0810 22:59:16.615222    5794 retry.go:31] will retry after 834.206799ms: waiting for machine to come up
	I0810 22:59:17.450954    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | domain calico-20210810225506-30291 has defined MAC address 52:54:00:5a:2e:fb in network mk-calico-20210810225506-30291
	I0810 22:59:17.451392    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | unable to find current IP address of domain calico-20210810225506-30291 in network mk-calico-20210810225506-30291
	I0810 22:59:17.451436    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | I0810 22:59:17.451349    5794 retry.go:31] will retry after 746.553905ms: waiting for machine to come up
	I0810 22:59:18.199827    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | domain calico-20210810225506-30291 has defined MAC address 52:54:00:5a:2e:fb in network mk-calico-20210810225506-30291
	I0810 22:59:18.200353    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | unable to find current IP address of domain calico-20210810225506-30291 in network mk-calico-20210810225506-30291
	I0810 22:59:18.200385    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | I0810 22:59:18.200295    5794 retry.go:31] will retry after 987.362415ms: waiting for machine to come up
	I0810 22:59:19.189256    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | domain calico-20210810225506-30291 has defined MAC address 52:54:00:5a:2e:fb in network mk-calico-20210810225506-30291
	I0810 22:59:19.189795    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | unable to find current IP address of domain calico-20210810225506-30291 in network mk-calico-20210810225506-30291
	I0810 22:59:19.189826    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | I0810 22:59:19.189772    5794 retry.go:31] will retry after 1.189835008s: waiting for machine to come up
	I0810 22:59:20.381017    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | domain calico-20210810225506-30291 has defined MAC address 52:54:00:5a:2e:fb in network mk-calico-20210810225506-30291
	I0810 22:59:20.381541    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | unable to find current IP address of domain calico-20210810225506-30291 in network mk-calico-20210810225506-30291
	I0810 22:59:20.381577    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | I0810 22:59:20.381498    5794 retry.go:31] will retry after 1.677229867s: waiting for machine to come up
	I0810 22:59:22.061414    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | domain calico-20210810225506-30291 has defined MAC address 52:54:00:5a:2e:fb in network mk-calico-20210810225506-30291
	I0810 22:59:22.061881    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | unable to find current IP address of domain calico-20210810225506-30291 in network mk-calico-20210810225506-30291
	I0810 22:59:22.061911    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | I0810 22:59:22.061807    5794 retry.go:31] will retry after 2.346016261s: waiting for machine to come up
	I0810 22:59:24.409373    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | domain calico-20210810225506-30291 has defined MAC address 52:54:00:5a:2e:fb in network mk-calico-20210810225506-30291
	I0810 22:59:24.409950    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | unable to find current IP address of domain calico-20210810225506-30291 in network mk-calico-20210810225506-30291
	I0810 22:59:24.410087    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | I0810 22:59:24.410016    5794 retry.go:31] will retry after 3.36678925s: waiting for machine to come up
	I0810 22:59:27.778800    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | domain calico-20210810225506-30291 has defined MAC address 52:54:00:5a:2e:fb in network mk-calico-20210810225506-30291
	I0810 22:59:27.779392    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | unable to find current IP address of domain calico-20210810225506-30291 in network mk-calico-20210810225506-30291
	I0810 22:59:27.779429    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | I0810 22:59:27.779305    5794 retry.go:31] will retry after 3.11822781s: waiting for machine to come up
	I0810 22:59:30.898890    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | domain calico-20210810225506-30291 has defined MAC address 52:54:00:5a:2e:fb in network mk-calico-20210810225506-30291
	I0810 22:59:30.899475    5770 main.go:130] libmachine: (calico-20210810225506-30291) Found IP for machine: 192.168.94.122
	I0810 22:59:30.899498    5770 main.go:130] libmachine: (calico-20210810225506-30291) Reserving static IP address...
	I0810 22:59:30.899523    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | domain calico-20210810225506-30291 has current primary IP address 192.168.94.122 and MAC address 52:54:00:5a:2e:fb in network mk-calico-20210810225506-30291
	I0810 22:59:30.899875    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | unable to find host DHCP lease matching {name: "calico-20210810225506-30291", mac: "52:54:00:5a:2e:fb", ip: "192.168.94.122"} in network mk-calico-20210810225506-30291
	I0810 22:59:30.960164    5770 main.go:130] libmachine: (calico-20210810225506-30291) Reserved static IP address: 192.168.94.122
	I0810 22:59:30.960203    5770 main.go:130] libmachine: (calico-20210810225506-30291) Waiting for SSH to be available...
	I0810 22:59:30.960212    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | Getting to WaitForSSH function...
	I0810 22:59:30.967304    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | domain calico-20210810225506-30291 has defined MAC address 52:54:00:5a:2e:fb in network mk-calico-20210810225506-30291
	I0810 22:59:30.967821    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:2e:fb", ip: ""} in network mk-calico-20210810225506-30291: {Iface:virbr6 ExpiryTime:2021-08-10 23:59:29 +0000 UTC Type:0 Mac:52:54:00:5a:2e:fb Iaid: IPaddr:192.168.94.122 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5a:2e:fb}
	I0810 22:59:30.967872    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | domain calico-20210810225506-30291 has defined IP address 192.168.94.122 and MAC address 52:54:00:5a:2e:fb in network mk-calico-20210810225506-30291
	I0810 22:59:30.968164    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | Using SSH client type: external
	I0810 22:59:30.968211    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/calico-20210810225506-30291/id_rsa (-rw-------)
	I0810 22:59:30.968255    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.94.122 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/calico-20210810225506-30291/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0810 22:59:30.968276    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | About to run SSH command:
	I0810 22:59:30.968292    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | exit 0
	I0810 22:59:31.120973    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | SSH cmd err, output: <nil>: 
	I0810 22:59:31.121579    5770 main.go:130] libmachine: (calico-20210810225506-30291) KVM machine creation complete!
	I0810 22:59:31.121604    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetConfigRaw
	I0810 22:59:31.122271    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .DriverName
	I0810 22:59:31.122531    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .DriverName
	I0810 22:59:31.122695    5770 main.go:130] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0810 22:59:31.122715    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetState
	I0810 22:59:31.125969    5770 main.go:130] libmachine: Detecting operating system of created instance...
	I0810 22:59:31.125988    5770 main.go:130] libmachine: Waiting for SSH to be available...
	I0810 22:59:31.125997    5770 main.go:130] libmachine: Getting to WaitForSSH function...
	I0810 22:59:31.126016    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetSSHHostname
	I0810 22:59:31.131955    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | domain calico-20210810225506-30291 has defined MAC address 52:54:00:5a:2e:fb in network mk-calico-20210810225506-30291
	I0810 22:59:31.132429    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:2e:fb", ip: ""} in network mk-calico-20210810225506-30291: {Iface:virbr6 ExpiryTime:2021-08-10 23:59:29 +0000 UTC Type:0 Mac:52:54:00:5a:2e:fb Iaid: IPaddr:192.168.94.122 Prefix:24 Hostname:calico-20210810225506-30291 Clientid:01:52:54:00:5a:2e:fb}
	I0810 22:59:31.132456    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | domain calico-20210810225506-30291 has defined IP address 192.168.94.122 and MAC address 52:54:00:5a:2e:fb in network mk-calico-20210810225506-30291
	I0810 22:59:31.132815    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetSSHPort
	I0810 22:59:31.133077    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetSSHKeyPath
	I0810 22:59:31.133278    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetSSHKeyPath
	I0810 22:59:31.133451    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetSSHUsername
	I0810 22:59:31.133615    5770 main.go:130] libmachine: Using SSH client type: native
	I0810 22:59:31.133814    5770 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.94.122 22 <nil> <nil>}
	I0810 22:59:31.133825    5770 main.go:130] libmachine: About to run SSH command:
	exit 0
	I0810 22:59:31.280660    5770 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0810 22:59:31.280691    5770 main.go:130] libmachine: Detecting the provisioner...
	I0810 22:59:31.280703    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetSSHHostname
	I0810 22:59:31.287000    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | domain calico-20210810225506-30291 has defined MAC address 52:54:00:5a:2e:fb in network mk-calico-20210810225506-30291
	I0810 22:59:31.287374    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:2e:fb", ip: ""} in network mk-calico-20210810225506-30291: {Iface:virbr6 ExpiryTime:2021-08-10 23:59:29 +0000 UTC Type:0 Mac:52:54:00:5a:2e:fb Iaid: IPaddr:192.168.94.122 Prefix:24 Hostname:calico-20210810225506-30291 Clientid:01:52:54:00:5a:2e:fb}
	I0810 22:59:31.287408    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | domain calico-20210810225506-30291 has defined IP address 192.168.94.122 and MAC address 52:54:00:5a:2e:fb in network mk-calico-20210810225506-30291
	I0810 22:59:31.287588    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetSSHPort
	I0810 22:59:31.287761    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetSSHKeyPath
	I0810 22:59:31.287915    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetSSHKeyPath
	I0810 22:59:31.288060    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetSSHUsername
	I0810 22:59:31.288296    5770 main.go:130] libmachine: Using SSH client type: native
	I0810 22:59:31.288493    5770 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.94.122 22 <nil> <nil>}
	I0810 22:59:31.288508    5770 main.go:130] libmachine: About to run SSH command:
	cat /etc/os-release
	I0810 22:59:31.430952    5770 main.go:130] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2020.02.12
	ID=buildroot
	VERSION_ID=2020.02.12
	PRETTY_NAME="Buildroot 2020.02.12"
	
	I0810 22:59:31.431043    5770 main.go:130] libmachine: found compatible host: buildroot
	I0810 22:59:31.431057    5770 main.go:130] libmachine: Provisioning with buildroot...
	I0810 22:59:31.431069    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetMachineName
	I0810 22:59:31.431359    5770 buildroot.go:166] provisioning hostname "calico-20210810225506-30291"
	I0810 22:59:31.431426    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetMachineName
	I0810 22:59:31.431667    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetSSHHostname
	I0810 22:59:31.437961    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | domain calico-20210810225506-30291 has defined MAC address 52:54:00:5a:2e:fb in network mk-calico-20210810225506-30291
	I0810 22:59:31.438398    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:2e:fb", ip: ""} in network mk-calico-20210810225506-30291: {Iface:virbr6 ExpiryTime:2021-08-10 23:59:29 +0000 UTC Type:0 Mac:52:54:00:5a:2e:fb Iaid: IPaddr:192.168.94.122 Prefix:24 Hostname:calico-20210810225506-30291 Clientid:01:52:54:00:5a:2e:fb}
	I0810 22:59:31.438425    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | domain calico-20210810225506-30291 has defined IP address 192.168.94.122 and MAC address 52:54:00:5a:2e:fb in network mk-calico-20210810225506-30291
	I0810 22:59:31.438660    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetSSHPort
	I0810 22:59:31.438882    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetSSHKeyPath
	I0810 22:59:31.439062    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetSSHKeyPath
	I0810 22:59:31.439209    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetSSHUsername
	I0810 22:59:31.439455    5770 main.go:130] libmachine: Using SSH client type: native
	I0810 22:59:31.439645    5770 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.94.122 22 <nil> <nil>}
	I0810 22:59:31.439666    5770 main.go:130] libmachine: About to run SSH command:
	sudo hostname calico-20210810225506-30291 && echo "calico-20210810225506-30291" | sudo tee /etc/hostname
	I0810 22:59:31.588051    5770 main.go:130] libmachine: SSH cmd err, output: <nil>: calico-20210810225506-30291
	
	I0810 22:59:31.588082    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetSSHHostname
	I0810 22:59:31.594279    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | domain calico-20210810225506-30291 has defined MAC address 52:54:00:5a:2e:fb in network mk-calico-20210810225506-30291
	I0810 22:59:31.594698    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:2e:fb", ip: ""} in network mk-calico-20210810225506-30291: {Iface:virbr6 ExpiryTime:2021-08-10 23:59:29 +0000 UTC Type:0 Mac:52:54:00:5a:2e:fb Iaid: IPaddr:192.168.94.122 Prefix:24 Hostname:calico-20210810225506-30291 Clientid:01:52:54:00:5a:2e:fb}
	I0810 22:59:31.594733    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | domain calico-20210810225506-30291 has defined IP address 192.168.94.122 and MAC address 52:54:00:5a:2e:fb in network mk-calico-20210810225506-30291
	I0810 22:59:31.594882    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetSSHPort
	I0810 22:59:31.595088    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetSSHKeyPath
	I0810 22:59:31.595272    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetSSHKeyPath
	I0810 22:59:31.595415    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetSSHUsername
	I0810 22:59:31.595581    5770 main.go:130] libmachine: Using SSH client type: native
	I0810 22:59:31.595755    5770 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.94.122 22 <nil> <nil>}
	I0810 22:59:31.595785    5770 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-20210810225506-30291' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-20210810225506-30291/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-20210810225506-30291' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0810 22:59:31.735273    5770 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0810 22:59:31.735305    5770 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube}
	I0810 22:59:31.735329    5770 buildroot.go:174] setting up certificates
	I0810 22:59:31.735342    5770 provision.go:83] configureAuth start
	I0810 22:59:31.735354    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetMachineName
	I0810 22:59:31.735655    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetIP
	I0810 22:59:31.741489    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | domain calico-20210810225506-30291 has defined MAC address 52:54:00:5a:2e:fb in network mk-calico-20210810225506-30291
	I0810 22:59:31.741893    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:2e:fb", ip: ""} in network mk-calico-20210810225506-30291: {Iface:virbr6 ExpiryTime:2021-08-10 23:59:29 +0000 UTC Type:0 Mac:52:54:00:5a:2e:fb Iaid: IPaddr:192.168.94.122 Prefix:24 Hostname:calico-20210810225506-30291 Clientid:01:52:54:00:5a:2e:fb}
	I0810 22:59:31.741915    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | domain calico-20210810225506-30291 has defined IP address 192.168.94.122 and MAC address 52:54:00:5a:2e:fb in network mk-calico-20210810225506-30291
	I0810 22:59:31.742075    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetSSHHostname
	I0810 22:59:31.746901    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | domain calico-20210810225506-30291 has defined MAC address 52:54:00:5a:2e:fb in network mk-calico-20210810225506-30291
	I0810 22:59:31.747291    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:2e:fb", ip: ""} in network mk-calico-20210810225506-30291: {Iface:virbr6 ExpiryTime:2021-08-10 23:59:29 +0000 UTC Type:0 Mac:52:54:00:5a:2e:fb Iaid: IPaddr:192.168.94.122 Prefix:24 Hostname:calico-20210810225506-30291 Clientid:01:52:54:00:5a:2e:fb}
	I0810 22:59:31.747330    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | domain calico-20210810225506-30291 has defined IP address 192.168.94.122 and MAC address 52:54:00:5a:2e:fb in network mk-calico-20210810225506-30291
	I0810 22:59:31.747451    5770 provision.go:137] copyHostCerts
	I0810 22:59:31.747522    5770 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem, removing ...
	I0810 22:59:31.747533    5770 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem
	I0810 22:59:31.747596    5770 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem (1082 bytes)
	I0810 22:59:31.747710    5770 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem, removing ...
	I0810 22:59:31.747723    5770 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem
	I0810 22:59:31.747753    5770 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem (1123 bytes)
	I0810 22:59:31.747816    5770 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem, removing ...
	I0810 22:59:31.747833    5770 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem
	I0810 22:59:31.747872    5770 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem (1679 bytes)
	I0810 22:59:31.747928    5770 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem org=jenkins.calico-20210810225506-30291 san=[192.168.94.122 192.168.94.122 localhost 127.0.0.1 minikube calico-20210810225506-30291]
	I0810 22:59:31.832101    5770 provision.go:171] copyRemoteCerts
	I0810 22:59:31.832181    5770 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0810 22:59:31.832216    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetSSHHostname
	I0810 22:59:31.838635    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | domain calico-20210810225506-30291 has defined MAC address 52:54:00:5a:2e:fb in network mk-calico-20210810225506-30291
	I0810 22:59:31.839030    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:2e:fb", ip: ""} in network mk-calico-20210810225506-30291: {Iface:virbr6 ExpiryTime:2021-08-10 23:59:29 +0000 UTC Type:0 Mac:52:54:00:5a:2e:fb Iaid: IPaddr:192.168.94.122 Prefix:24 Hostname:calico-20210810225506-30291 Clientid:01:52:54:00:5a:2e:fb}
	I0810 22:59:31.839063    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | domain calico-20210810225506-30291 has defined IP address 192.168.94.122 and MAC address 52:54:00:5a:2e:fb in network mk-calico-20210810225506-30291
	I0810 22:59:31.839217    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetSSHPort
	I0810 22:59:31.839407    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetSSHKeyPath
	I0810 22:59:31.839580    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetSSHUsername
	I0810 22:59:31.839725    5770 sshutil.go:53] new ssh client: &{IP:192.168.94.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/calico-20210810225506-30291/id_rsa Username:docker}
	I0810 22:59:31.939984    5770 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0810 22:59:31.960214    5770 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0810 22:59:31.986185    5770 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0810 22:59:32.013168    5770 provision.go:86] duration metric: configureAuth took 277.811303ms
	I0810 22:59:32.013201    5770 buildroot.go:189] setting minikube options for container-runtime
	I0810 22:59:32.013500    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetSSHHostname
	I0810 22:59:32.019826    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | domain calico-20210810225506-30291 has defined MAC address 52:54:00:5a:2e:fb in network mk-calico-20210810225506-30291
	I0810 22:59:32.020191    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:2e:fb", ip: ""} in network mk-calico-20210810225506-30291: {Iface:virbr6 ExpiryTime:2021-08-10 23:59:29 +0000 UTC Type:0 Mac:52:54:00:5a:2e:fb Iaid: IPaddr:192.168.94.122 Prefix:24 Hostname:calico-20210810225506-30291 Clientid:01:52:54:00:5a:2e:fb}
	I0810 22:59:32.020224    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | domain calico-20210810225506-30291 has defined IP address 192.168.94.122 and MAC address 52:54:00:5a:2e:fb in network mk-calico-20210810225506-30291
	I0810 22:59:32.020423    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetSSHPort
	I0810 22:59:32.020617    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetSSHKeyPath
	I0810 22:59:32.020795    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetSSHKeyPath
	I0810 22:59:32.020953    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetSSHUsername
	I0810 22:59:32.021130    5770 main.go:130] libmachine: Using SSH client type: native
	I0810 22:59:32.021269    5770 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.94.122 22 <nil> <nil>}
	I0810 22:59:32.021286    5770 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0810 22:59:32.904153    5770 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0810 22:59:32.904239    5770 main.go:130] libmachine: Checking connection to Docker...
	I0810 22:59:32.904272    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetURL
	I0810 22:59:32.907517    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | Using libvirt version 3000000
	I0810 22:59:32.913104    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | domain calico-20210810225506-30291 has defined MAC address 52:54:00:5a:2e:fb in network mk-calico-20210810225506-30291
	I0810 22:59:32.913651    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:2e:fb", ip: ""} in network mk-calico-20210810225506-30291: {Iface:virbr6 ExpiryTime:2021-08-10 23:59:29 +0000 UTC Type:0 Mac:52:54:00:5a:2e:fb Iaid: IPaddr:192.168.94.122 Prefix:24 Hostname:calico-20210810225506-30291 Clientid:01:52:54:00:5a:2e:fb}
	I0810 22:59:32.913689    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | domain calico-20210810225506-30291 has defined IP address 192.168.94.122 and MAC address 52:54:00:5a:2e:fb in network mk-calico-20210810225506-30291
	I0810 22:59:32.913957    5770 main.go:130] libmachine: Docker is up and running!
	I0810 22:59:32.913975    5770 main.go:130] libmachine: Reticulating splines...
	I0810 22:59:32.913983    5770 client.go:171] LocalClient.Create took 19.717782447s
	I0810 22:59:32.914003    5770 start.go:168] duration metric: libmachine.API.Create for "calico-20210810225506-30291" took 19.71789029s
	I0810 22:59:32.914017    5770 start.go:267] post-start starting for "calico-20210810225506-30291" (driver="kvm2")
	I0810 22:59:32.914023    5770 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0810 22:59:32.914044    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .DriverName
	I0810 22:59:32.914270    5770 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0810 22:59:32.914299    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetSSHHostname
	I0810 22:59:32.919382    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | domain calico-20210810225506-30291 has defined MAC address 52:54:00:5a:2e:fb in network mk-calico-20210810225506-30291
	I0810 22:59:32.919809    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:2e:fb", ip: ""} in network mk-calico-20210810225506-30291: {Iface:virbr6 ExpiryTime:2021-08-10 23:59:29 +0000 UTC Type:0 Mac:52:54:00:5a:2e:fb Iaid: IPaddr:192.168.94.122 Prefix:24 Hostname:calico-20210810225506-30291 Clientid:01:52:54:00:5a:2e:fb}
	I0810 22:59:32.919877    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | domain calico-20210810225506-30291 has defined IP address 192.168.94.122 and MAC address 52:54:00:5a:2e:fb in network mk-calico-20210810225506-30291
	I0810 22:59:32.920032    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetSSHPort
	I0810 22:59:32.920235    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetSSHKeyPath
	I0810 22:59:32.920366    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetSSHUsername
	I0810 22:59:32.920481    5770 sshutil.go:53] new ssh client: &{IP:192.168.94.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/calico-20210810225506-30291/id_rsa Username:docker}
	I0810 22:59:33.021347    5770 ssh_runner.go:149] Run: cat /etc/os-release
	I0810 22:59:33.026842    5770 info.go:137] Remote host: Buildroot 2020.02.12
	I0810 22:59:33.026872    5770 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/addons for local assets ...
	I0810 22:59:33.026931    5770 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files for local assets ...
	I0810 22:59:33.027049    5770 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/302912.pem -> 302912.pem in /etc/ssl/certs
	I0810 22:59:33.027175    5770 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0810 22:59:33.036648    5770 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/302912.pem --> /etc/ssl/certs/302912.pem (1708 bytes)
	I0810 22:59:33.059579    5770 start.go:270] post-start completed in 145.545878ms
	I0810 22:59:33.059669    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetConfigRaw
	I0810 22:59:33.060385    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetIP
	I0810 22:59:33.066820    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | domain calico-20210810225506-30291 has defined MAC address 52:54:00:5a:2e:fb in network mk-calico-20210810225506-30291
	I0810 22:59:33.067214    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:2e:fb", ip: ""} in network mk-calico-20210810225506-30291: {Iface:virbr6 ExpiryTime:2021-08-10 23:59:29 +0000 UTC Type:0 Mac:52:54:00:5a:2e:fb Iaid: IPaddr:192.168.94.122 Prefix:24 Hostname:calico-20210810225506-30291 Clientid:01:52:54:00:5a:2e:fb}
	I0810 22:59:33.067240    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | domain calico-20210810225506-30291 has defined IP address 192.168.94.122 and MAC address 52:54:00:5a:2e:fb in network mk-calico-20210810225506-30291
	I0810 22:59:33.067606    5770 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225506-30291/config.json ...
	I0810 22:59:33.067926    5770 start.go:129] duration metric: createHost completed in 19.888902906s
	I0810 22:59:33.067947    5770 start.go:80] releasing machines lock for "calico-20210810225506-30291", held for 19.889011832s
	I0810 22:59:33.068005    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .DriverName
	I0810 22:59:33.068296    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetIP
	I0810 22:59:33.073866    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | domain calico-20210810225506-30291 has defined MAC address 52:54:00:5a:2e:fb in network mk-calico-20210810225506-30291
	I0810 22:59:33.074338    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:2e:fb", ip: ""} in network mk-calico-20210810225506-30291: {Iface:virbr6 ExpiryTime:2021-08-10 23:59:29 +0000 UTC Type:0 Mac:52:54:00:5a:2e:fb Iaid: IPaddr:192.168.94.122 Prefix:24 Hostname:calico-20210810225506-30291 Clientid:01:52:54:00:5a:2e:fb}
	I0810 22:59:33.074412    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | domain calico-20210810225506-30291 has defined IP address 192.168.94.122 and MAC address 52:54:00:5a:2e:fb in network mk-calico-20210810225506-30291
	I0810 22:59:33.074767    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .DriverName
	I0810 22:59:33.074913    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .DriverName
	I0810 22:59:33.075837    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .DriverName
	I0810 22:59:33.076131    5770 ssh_runner.go:149] Run: systemctl --version
	I0810 22:59:33.076158    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetSSHHostname
	I0810 22:59:33.076166    5770 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0810 22:59:33.076222    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetSSHHostname
	I0810 22:59:33.083693    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | domain calico-20210810225506-30291 has defined MAC address 52:54:00:5a:2e:fb in network mk-calico-20210810225506-30291
	I0810 22:59:33.083780    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | domain calico-20210810225506-30291 has defined MAC address 52:54:00:5a:2e:fb in network mk-calico-20210810225506-30291
	I0810 22:59:33.084351    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:2e:fb", ip: ""} in network mk-calico-20210810225506-30291: {Iface:virbr6 ExpiryTime:2021-08-10 23:59:29 +0000 UTC Type:0 Mac:52:54:00:5a:2e:fb Iaid: IPaddr:192.168.94.122 Prefix:24 Hostname:calico-20210810225506-30291 Clientid:01:52:54:00:5a:2e:fb}
	I0810 22:59:33.084374    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | domain calico-20210810225506-30291 has defined IP address 192.168.94.122 and MAC address 52:54:00:5a:2e:fb in network mk-calico-20210810225506-30291
	I0810 22:59:33.084535    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:2e:fb", ip: ""} in network mk-calico-20210810225506-30291: {Iface:virbr6 ExpiryTime:2021-08-10 23:59:29 +0000 UTC Type:0 Mac:52:54:00:5a:2e:fb Iaid: IPaddr:192.168.94.122 Prefix:24 Hostname:calico-20210810225506-30291 Clientid:01:52:54:00:5a:2e:fb}
	I0810 22:59:33.084555    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | domain calico-20210810225506-30291 has defined IP address 192.168.94.122 and MAC address 52:54:00:5a:2e:fb in network mk-calico-20210810225506-30291
	I0810 22:59:33.084587    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetSSHPort
	I0810 22:59:33.084746    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetSSHKeyPath
	I0810 22:59:33.084795    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetSSHPort
	I0810 22:59:33.084919    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetSSHUsername
	I0810 22:59:33.087647    5770 sshutil.go:53] new ssh client: &{IP:192.168.94.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/calico-20210810225506-30291/id_rsa Username:docker}
	I0810 22:59:33.088023    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetSSHKeyPath
	I0810 22:59:33.088199    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetSSHUsername
	I0810 22:59:33.088353    5770 sshutil.go:53] new ssh client: &{IP:192.168.94.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/calico-20210810225506-30291/id_rsa Username:docker}
	I0810 22:59:33.200966    5770 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0810 22:59:33.201098    5770 ssh_runner.go:149] Run: sudo crictl images --output json
	I0810 22:59:37.232392    5770 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.031264526s)
	I0810 22:59:37.232523    5770 crio.go:420] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.21.3". assuming images are not preloaded.
	I0810 22:59:37.232582    5770 ssh_runner.go:149] Run: which lz4
	I0810 22:59:37.238631    5770 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0810 22:59:37.244490    5770 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0810 22:59:37.244524    5770 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (576184326 bytes)
	I0810 22:59:39.516697    5770 crio.go:362] Took 2.278102 seconds to copy over tarball
	I0810 22:59:39.516833    5770 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0810 22:59:45.784061    5770 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (6.267175831s)
	I0810 22:59:45.784091    5770 crio.go:369] Took 6.267308 seconds to extract the tarball
	I0810 22:59:45.784105    5770 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0810 22:59:45.832357    5770 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0810 22:59:45.846531    5770 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0810 22:59:45.858202    5770 docker.go:153] disabling docker service ...
	I0810 22:59:45.858276    5770 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0810 22:59:45.870491    5770 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0810 22:59:45.880519    5770 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0810 22:59:46.030105    5770 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0810 22:59:46.174756    5770 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0810 22:59:46.186259    5770 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0810 22:59:46.201238    5770 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0810 22:59:46.209524    5770 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0810 22:59:46.216665    5770 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0810 22:59:46.216727    5770 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0810 22:59:46.235032    5770 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0810 22:59:46.243092    5770 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0810 22:59:46.381544    5770 ssh_runner.go:149] Run: sudo systemctl start crio
	I0810 22:59:46.660650    5770 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0810 22:59:46.660727    5770 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0810 22:59:46.667383    5770 start.go:417] Will wait 60s for crictl version
	I0810 22:59:46.667439    5770 ssh_runner.go:149] Run: sudo crictl version
	I0810 22:59:46.706894    5770 start.go:426] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.2
	RuntimeApiVersion:  v1alpha1
	I0810 22:59:46.706986    5770 ssh_runner.go:149] Run: crio --version
	I0810 22:59:46.953810    5770 ssh_runner.go:149] Run: crio --version
	I0810 22:59:50.104819    5770 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.2 ...
	I0810 22:59:50.104980    5770 main.go:130] libmachine: (calico-20210810225506-30291) Calling .GetIP
	I0810 22:59:50.430612    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | domain calico-20210810225506-30291 has defined MAC address 52:54:00:5a:2e:fb in network mk-calico-20210810225506-30291
	I0810 22:59:50.431040    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:2e:fb", ip: ""} in network mk-calico-20210810225506-30291: {Iface:virbr6 ExpiryTime:2021-08-10 23:59:29 +0000 UTC Type:0 Mac:52:54:00:5a:2e:fb Iaid: IPaddr:192.168.94.122 Prefix:24 Hostname:calico-20210810225506-30291 Clientid:01:52:54:00:5a:2e:fb}
	I0810 22:59:50.431097    5770 main.go:130] libmachine: (calico-20210810225506-30291) DBG | domain calico-20210810225506-30291 has defined IP address 192.168.94.122 and MAC address 52:54:00:5a:2e:fb in network mk-calico-20210810225506-30291
	I0810 22:59:50.431349    5770 ssh_runner.go:149] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0810 22:59:50.438241    5770 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0810 22:59:50.452840    5770 localpath.go:92] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/client.crt -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225506-30291/client.crt
	I0810 22:59:50.452985    5770 localpath.go:117] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/client.key -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225506-30291/client.key
	I0810 22:59:50.453129    5770 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0810 22:59:50.453196    5770 ssh_runner.go:149] Run: sudo crictl images --output json
	I0810 22:59:50.554662    5770 crio.go:424] all images are preloaded for cri-o runtime.
	I0810 22:59:50.554697    5770 crio.go:333] Images already preloaded, skipping extraction
	I0810 22:59:50.554754    5770 ssh_runner.go:149] Run: sudo crictl images --output json
	I0810 22:59:50.603728    5770 crio.go:424] all images are preloaded for cri-o runtime.
	I0810 22:59:50.603759    5770 cache_images.go:74] Images are preloaded, skipping loading
	I0810 22:59:50.603839    5770 ssh_runner.go:149] Run: crio config
	I0810 22:59:50.977486    5770 cni.go:93] Creating CNI manager for "calico"
	I0810 22:59:50.977511    5770 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0810 22:59:50.977525    5770 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.122 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-20210810225506-30291 NodeName:calico-20210810225506-30291 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.122"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.94.122 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0810 22:59:50.977679    5770 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.122
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "calico-20210810225506-30291"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.122
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.122"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0810 22:59:50.977790    5770 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --hostname-override=calico-20210810225506-30291 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.94.122 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:calico-20210810225506-30291 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
	I0810 22:59:50.977851    5770 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0810 22:59:50.985633    5770 binaries.go:44] Found k8s binaries, skipping transfer
	I0810 22:59:50.985696    5770 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0810 22:59:50.992990    5770 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (508 bytes)
	I0810 22:59:51.005985    5770 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0810 22:59:51.022034    5770 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2074 bytes)
	I0810 22:59:51.035266    5770 ssh_runner.go:149] Run: grep 192.168.94.122	control-plane.minikube.internal$ /etc/hosts
	I0810 22:59:51.039728    5770 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.122	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0810 22:59:51.059310    5770 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225506-30291 for IP: 192.168.94.122
	I0810 22:59:51.059375    5770 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key
	I0810 22:59:51.059399    5770 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key
	I0810 22:59:51.059467    5770 certs.go:290] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225506-30291/client.key
	I0810 22:59:51.059495    5770 certs.go:294] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225506-30291/apiserver.key.ef5e188d
	I0810 22:59:51.059510    5770 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225506-30291/apiserver.crt.ef5e188d with IP's: [192.168.94.122 10.96.0.1 127.0.0.1 10.0.0.1]
	I0810 22:59:51.321470    5770 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225506-30291/apiserver.crt.ef5e188d ...
	I0810 22:59:51.321508    5770 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225506-30291/apiserver.crt.ef5e188d: {Name:mk022987da2ca455cbe13b8eb6bd3a6d845470f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:59:51.321730    5770 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225506-30291/apiserver.key.ef5e188d ...
	I0810 22:59:51.321746    5770 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225506-30291/apiserver.key.ef5e188d: {Name:mk8b17f028ee62b3f19c8c1d50c1b3dc952c2cf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:59:51.321857    5770 certs.go:305] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225506-30291/apiserver.crt.ef5e188d -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225506-30291/apiserver.crt
	I0810 22:59:51.321934    5770 certs.go:309] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225506-30291/apiserver.key.ef5e188d -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225506-30291/apiserver.key
	I0810 22:59:51.321999    5770 certs.go:294] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225506-30291/proxy-client.key
	I0810 22:59:51.322011    5770 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225506-30291/proxy-client.crt with IP's: []
	I0810 22:59:51.454082    5770 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225506-30291/proxy-client.crt ...
	I0810 22:59:51.454119    5770 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225506-30291/proxy-client.crt: {Name:mkdfd227e621cb6ad67e93e133daa596709394bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:59:51.454371    5770 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225506-30291/proxy-client.key ...
	I0810 22:59:51.454396    5770 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225506-30291/proxy-client.key: {Name:mk07c943ba7d55d8d2a8767cb48e69333650f4ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0810 22:59:51.454634    5770 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/30291.pem (1338 bytes)
	W0810 22:59:51.454695    5770 certs.go:369] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/30291_empty.pem, impossibly tiny 0 bytes
	I0810 22:59:51.454717    5770 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem (1679 bytes)
	I0810 22:59:51.454762    5770 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem (1082 bytes)
	I0810 22:59:51.454799    5770 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem (1123 bytes)
	I0810 22:59:51.454836    5770 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem (1679 bytes)
	I0810 22:59:51.454902    5770 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/302912.pem (1708 bytes)
	I0810 22:59:51.455896    5770 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225506-30291/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0810 22:59:51.477191    5770 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225506-30291/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0810 22:59:51.500260    5770 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225506-30291/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0810 22:59:51.522330    5770 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210810225506-30291/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0810 22:59:51.543981    5770 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0810 22:59:51.566712    5770 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0810 22:59:51.589964    5770 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0810 22:59:51.612948    5770 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0810 22:59:51.635975    5770 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0810 22:59:51.658402    5770 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/30291.pem --> /usr/share/ca-certificates/30291.pem (1338 bytes)
	I0810 22:59:51.681825    5770 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/302912.pem --> /usr/share/ca-certificates/302912.pem (1708 bytes)
	I0810 22:59:51.703968    5770 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0810 22:59:51.719580    5770 ssh_runner.go:149] Run: openssl version
	I0810 22:59:51.727896    5770 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0810 22:59:51.739428    5770 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0810 22:59:51.747344    5770 certs.go:416] hashing: -rw-r--r-- 1 root root 1111 Aug 10 22:18 /usr/share/ca-certificates/minikubeCA.pem
	I0810 22:59:51.747402    5770 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0810 22:59:51.755636    5770 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0810 22:59:51.766283    5770 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/30291.pem && ln -fs /usr/share/ca-certificates/30291.pem /etc/ssl/certs/30291.pem"
	I0810 22:59:51.778428    5770 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/30291.pem
	I0810 22:59:51.784665    5770 certs.go:416] hashing: -rw-r--r-- 1 root root 1338 Aug 10 22:27 /usr/share/ca-certificates/30291.pem
	I0810 22:59:51.784712    5770 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/30291.pem
	I0810 22:59:51.792849    5770 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/30291.pem /etc/ssl/certs/51391683.0"
	I0810 22:59:51.803692    5770 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/302912.pem && ln -fs /usr/share/ca-certificates/302912.pem /etc/ssl/certs/302912.pem"
	I0810 22:59:51.814134    5770 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/302912.pem
	I0810 22:59:51.820186    5770 certs.go:416] hashing: -rw-r--r-- 1 root root 1708 Aug 10 22:27 /usr/share/ca-certificates/302912.pem
	I0810 22:59:51.820233    5770 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/302912.pem
	I0810 22:59:51.828031    5770 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/302912.pem /etc/ssl/certs/3ec20f2e.0"
	I0810 22:59:51.838425    5770 kubeadm.go:390] StartCluster: {Name:calico-20210810225506-30291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12122/minikube-v1.22.0-1628238775-12122.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:calico-20210810225506-30291 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.122 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0810 22:59:51.838524    5770 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0810 22:59:51.838582    5770 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0810 22:59:51.879860    5770 cri.go:76] found id: ""
	I0810 22:59:51.879942    5770 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0810 22:59:51.889161    5770 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0810 22:59:51.899110    5770 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0810 22:59:51.907666    5770 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0810 22:59:51.907716    5770 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	I0810 22:59:52.805279    5770 out.go:204]   - Generating certificates and keys ...
	I0810 22:59:56.104471    5770 out.go:204]   - Booting up control plane ...
	I0810 23:00:15.398032    5770 out.go:204]   - Configuring RBAC rules ...
	I0810 23:00:16.393940    5770 cni.go:93] Creating CNI manager for "calico"
	I0810 23:00:16.396582    5770 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0810 23:00:16.396713    5770 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0810 23:00:16.396726    5770 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (22469 bytes)
	I0810 23:00:16.429513    5770 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	W0810 23:00:17.136101    5770 out.go:242] ! initialization failed, will try again: apply cni: cni apply: cmd: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml output: -- stdout --
	configmap/calico-config created
	
	-- /stdout --
	** stderr ** 
	error: error validating "/var/tmp/minikube/cni.yaml": error validating data: [ValidationError(CustomResourceDefinition.spec): unknown field "version" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec, ValidationError(CustomResourceDefinition.spec): missing required field "versions" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec]; if you choose to ignore these errors, turn validation off with --validate=false
	
	** /stderr **: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	configmap/calico-config created
	
	stderr:
	error: error validating "/var/tmp/minikube/cni.yaml": error validating data: [ValidationError(CustomResourceDefinition.spec): unknown field "version" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec, ValidationError(CustomResourceDefinition.spec): missing required field "versions" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec]; if you choose to ignore these errors, turn validation off with --validate=false
	
	I0810 23:00:17.136184    5770 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0810 23:00:42.088040    5770 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (24.951825555s)
	I0810 23:00:42.088137    5770 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0810 23:00:42.108890    5770 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0810 23:00:42.108979    5770 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0810 23:00:42.157763    5770 cri.go:76] found id: ""
	I0810 23:00:42.157879    5770 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0810 23:00:42.170339    5770 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0810 23:00:42.170388    5770 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	I0810 23:00:42.850840    5770 out.go:204]   - Generating certificates and keys ...
	I0810 23:00:43.998402    5770 out.go:204]   - Booting up control plane ...
	I0810 23:01:01.875286    5770 out.go:204]   - Configuring RBAC rules ...
	I0810 23:01:02.910618    5770 cni.go:93] Creating CNI manager for "calico"
	I0810 23:01:02.912277    5770 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0810 23:01:02.912346    5770 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0810 23:01:02.912358    5770 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (22469 bytes)
	I0810 23:01:02.928163    5770 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0810 23:01:03.357063    5770 kubeadm.go:392] StartCluster complete in 1m11.518640091s
	I0810 23:01:03.357157    5770 cri.go:41] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0810 23:01:03.357220    5770 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0810 23:01:03.409888    5770 cri.go:76] found id: "25b2a3ad9c0a90c691101d3904a59009262c4a0ad6e687dfe8b06737185e28e9"
	I0810 23:01:03.409920    5770 cri.go:76] found id: ""
	I0810 23:01:03.409929    5770 logs.go:270] 1 containers: [25b2a3ad9c0a90c691101d3904a59009262c4a0ad6e687dfe8b06737185e28e9]
	I0810 23:01:03.409984    5770 ssh_runner.go:149] Run: which crictl
	I0810 23:01:03.415118    5770 cri.go:41] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0810 23:01:03.415188    5770 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0810 23:01:03.457906    5770 cri.go:76] found id: "94b3480f3cc9174b7b56f0ef5f9712a23bee33f050dd22375e699091946b0824"
	I0810 23:01:03.457931    5770 cri.go:76] found id: ""
	I0810 23:01:03.457939    5770 logs.go:270] 1 containers: [94b3480f3cc9174b7b56f0ef5f9712a23bee33f050dd22375e699091946b0824]
	I0810 23:01:03.457990    5770 ssh_runner.go:149] Run: which crictl
	I0810 23:01:03.463128    5770 cri.go:41] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0810 23:01:03.463197    5770 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0810 23:01:03.511617    5770 cri.go:76] found id: ""
	I0810 23:01:03.511660    5770 logs.go:270] 0 containers: []
	W0810 23:01:03.511672    5770 logs.go:272] No container was found matching "coredns"
	I0810 23:01:03.511682    5770 cri.go:41] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0810 23:01:03.511774    5770 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0810 23:01:03.565315    5770 cri.go:76] found id: "6c444471565880bdbaccb52919ef132b31fa9d611dc041836447da842c9992f3"
	I0810 23:01:03.565346    5770 cri.go:76] found id: ""
	I0810 23:01:03.565354    5770 logs.go:270] 1 containers: [6c444471565880bdbaccb52919ef132b31fa9d611dc041836447da842c9992f3]
	I0810 23:01:03.565414    5770 ssh_runner.go:149] Run: which crictl
	I0810 23:01:03.570519    5770 cri.go:41] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0810 23:01:03.570596    5770 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0810 23:01:03.621410    5770 cri.go:76] found id: ""
	I0810 23:01:03.621438    5770 logs.go:270] 0 containers: []
	W0810 23:01:03.621446    5770 logs.go:272] No container was found matching "kube-proxy"
	I0810 23:01:03.621452    5770 cri.go:41] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0810 23:01:03.621506    5770 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0810 23:01:03.667031    5770 cri.go:76] found id: ""
	I0810 23:01:03.667065    5770 logs.go:270] 0 containers: []
	W0810 23:01:03.667073    5770 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0810 23:01:03.667082    5770 cri.go:41] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0810 23:01:03.667142    5770 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0810 23:01:03.718708    5770 cri.go:76] found id: ""
	I0810 23:01:03.718737    5770 logs.go:270] 0 containers: []
	W0810 23:01:03.718744    5770 logs.go:272] No container was found matching "storage-provisioner"
	I0810 23:01:03.718754    5770 cri.go:41] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0810 23:01:03.718807    5770 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0810 23:01:03.782733    5770 cri.go:76] found id: "4612f6b61216708d478898de966aaa36a1b53cc9de48fc70463f81cd387ab8a9"
	I0810 23:01:03.782764    5770 cri.go:76] found id: ""
	I0810 23:01:03.782772    5770 logs.go:270] 1 containers: [4612f6b61216708d478898de966aaa36a1b53cc9de48fc70463f81cd387ab8a9]
	I0810 23:01:03.782827    5770 ssh_runner.go:149] Run: which crictl
	I0810 23:01:03.808591    5770 logs.go:123] Gathering logs for etcd [94b3480f3cc9174b7b56f0ef5f9712a23bee33f050dd22375e699091946b0824] ...
	I0810 23:01:03.808621    5770 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 94b3480f3cc9174b7b56f0ef5f9712a23bee33f050dd22375e699091946b0824"
	I0810 23:01:03.881807    5770 logs.go:123] Gathering logs for kube-controller-manager [4612f6b61216708d478898de966aaa36a1b53cc9de48fc70463f81cd387ab8a9] ...
	I0810 23:01:03.881850    5770 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 4612f6b61216708d478898de966aaa36a1b53cc9de48fc70463f81cd387ab8a9"
	I0810 23:01:03.934609    5770 logs.go:123] Gathering logs for container status ...
	I0810 23:01:03.934656    5770 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0810 23:01:03.994690    5770 logs.go:123] Gathering logs for kubelet ...
	I0810 23:01:03.994735    5770 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0810 23:01:04.106976    5770 logs.go:123] Gathering logs for dmesg ...
	I0810 23:01:04.107022    5770 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0810 23:01:04.137912    5770 logs.go:123] Gathering logs for describe nodes ...
	I0810 23:01:04.137953    5770 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0810 23:01:04.406325    5770 logs.go:123] Gathering logs for kube-apiserver [25b2a3ad9c0a90c691101d3904a59009262c4a0ad6e687dfe8b06737185e28e9] ...
	I0810 23:01:04.406364    5770 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 25b2a3ad9c0a90c691101d3904a59009262c4a0ad6e687dfe8b06737185e28e9"
	I0810 23:01:04.523392    5770 logs.go:123] Gathering logs for kube-scheduler [6c444471565880bdbaccb52919ef132b31fa9d611dc041836447da842c9992f3] ...
	I0810 23:01:04.523435    5770 ssh_runner.go:149] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 6c444471565880bdbaccb52919ef132b31fa9d611dc041836447da842c9992f3"
	I0810 23:01:04.596273    5770 logs.go:123] Gathering logs for CRI-O ...
	I0810 23:01:04.596320    5770 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0810 23:01:04.811667    5770 out.go:371] Error starting cluster: apply cni: cni apply: cmd: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml output: -- stdout --
	configmap/calico-config created
	
	-- /stdout --
	** stderr ** 
	error: error validating "/var/tmp/minikube/cni.yaml": error validating data: [ValidationError(CustomResourceDefinition.spec): unknown field "version" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec, ValidationError(CustomResourceDefinition.spec): missing required field "versions" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec]; if you choose to ignore these errors, turn validation off with --validate=false
	
	** /stderr **: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	configmap/calico-config created
	
	stderr:
	error: error validating "/var/tmp/minikube/cni.yaml": error validating data: [ValidationError(CustomResourceDefinition.spec): unknown field "version" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec, ValidationError(CustomResourceDefinition.spec): missing required field "versions" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec]; if you choose to ignore these errors, turn validation off with --validate=false
	W0810 23:01:04.811717    5770 out.go:242] * 
	W0810 23:01:04.811873    5770 out.go:242] X Error starting cluster: apply cni: cni apply: cmd: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml output: -- stdout --
	configmap/calico-config created
	
	-- /stdout --
	** stderr ** 
	error: error validating "/var/tmp/minikube/cni.yaml": error validating data: [ValidationError(CustomResourceDefinition.spec): unknown field "version" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec, ValidationError(CustomResourceDefinition.spec): missing required field "versions" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec]; if you choose to ignore these errors, turn validation off with --validate=false
	
	** /stderr **: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	configmap/calico-config created
	
	stderr:
	error: error validating "/var/tmp/minikube/cni.yaml": error validating data: [ValidationError(CustomResourceDefinition.spec): unknown field "version" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec, ValidationError(CustomResourceDefinition.spec): missing required field "versions" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec]; if you choose to ignore these errors, turn validation off with --validate=false
	
	W0810 23:01:04.811888    5770 out.go:242] * 
	[warning]: invalid value provided to Color, using default
	W0810 23:01:04.814557    5770 out.go:242] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                                        │
	│                                                                                                                                                      │
	│    * Please attach the following file to the GitHub issue:                                                                                           │
	│    * - /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/logs/lastStart.txt    │
	│                                                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0810 23:01:04.822893    5770 out.go:177] 
	W0810 23:01:04.823064    5770 out.go:242] X Exiting due to GUEST_START: apply cni: cni apply: cmd: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml output: -- stdout --
	configmap/calico-config created
	
	-- /stdout --
	** stderr ** 
	error: error validating "/var/tmp/minikube/cni.yaml": error validating data: [ValidationError(CustomResourceDefinition.spec): unknown field "version" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec, ValidationError(CustomResourceDefinition.spec): missing required field "versions" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec]; if you choose to ignore these errors, turn validation off with --validate=false
	
	** /stderr **: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	configmap/calico-config created
	
	stderr:
	error: error validating "/var/tmp/minikube/cni.yaml": error validating data: [ValidationError(CustomResourceDefinition.spec): unknown field "version" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec, ValidationError(CustomResourceDefinition.spec): missing required field "versions" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec]; if you choose to ignore these errors, turn validation off with --validate=false
	
	W0810 23:01:04.823080    5770 out.go:242] * 
	[warning]: invalid value provided to Color, using default
	W0810 23:01:04.825683    5770 out.go:242] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                                        │
	│                                                                                                                                                      │
	│    * Please attach the following file to the GitHub issue:                                                                                           │
	│    * - /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/logs/lastStart.txt    │
	│                                                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0810 23:01:04.827536    5770 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:100: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (111.88s)

                                                
                                    

Test pass (230/263)

Order passed test Duration
3 TestDownloadOnly/v1.14.0/json-events 6.5
4 TestDownloadOnly/v1.14.0/preload-exists 0
8 TestDownloadOnly/v1.14.0/LogsDuration 0.07
10 TestDownloadOnly/v1.21.3/json-events 6.65
11 TestDownloadOnly/v1.21.3/preload-exists 0
15 TestDownloadOnly/v1.21.3/LogsDuration 0.07
17 TestDownloadOnly/v1.22.0-rc.0/json-events 6.38
18 TestDownloadOnly/v1.22.0-rc.0/preload-exists 0
22 TestDownloadOnly/v1.22.0-rc.0/LogsDuration 0.07
23 TestDownloadOnly/DeleteAll 0.24
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.23
26 TestOffline 98.97
29 TestAddons/parallel/Registry 26.42
31 TestAddons/parallel/MetricsServer 6.4
32 TestAddons/parallel/HelmTiller 12.68
33 TestAddons/parallel/Olm 63.2
34 TestAddons/parallel/CSI 85.11
35 TestAddons/parallel/GCPAuth 91.43
36 TestCertOptions 74.79
38 TestForceSystemdFlag 71.56
39 TestForceSystemdEnv 65.48
40 TestKVMDriverInstallOrUpdate 4.14
44 TestErrorSpam/setup 56.71
45 TestErrorSpam/start 0.42
46 TestErrorSpam/status 0.77
47 TestErrorSpam/pause 3.53
48 TestErrorSpam/unpause 1.75
49 TestErrorSpam/stop 6.25
52 TestFunctional/serial/CopySyncFile 0
53 TestFunctional/serial/StartWithProxy 74.6
54 TestFunctional/serial/AuditLog 0
55 TestFunctional/serial/SoftStart 20.4
56 TestFunctional/serial/KubeContext 0.04
57 TestFunctional/serial/KubectlGetPods 0.2
60 TestFunctional/serial/CacheCmd/cache/add_remote 4.38
61 TestFunctional/serial/CacheCmd/cache/add_local 2.31
62 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.06
63 TestFunctional/serial/CacheCmd/cache/list 0.06
64 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
65 TestFunctional/serial/CacheCmd/cache/cache_reload 1.98
66 TestFunctional/serial/CacheCmd/cache/delete 0.11
67 TestFunctional/serial/MinikubeKubectlCmd 0.11
68 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
69 TestFunctional/serial/ExtraConfig 35.72
70 TestFunctional/serial/ComponentHealth 0.08
71 TestFunctional/serial/LogsCmd 1.44
72 TestFunctional/serial/LogsFileCmd 1.46
74 TestFunctional/parallel/ConfigCmd 0.41
75 TestFunctional/parallel/DashboardCmd 6.13
76 TestFunctional/parallel/DryRun 0.34
77 TestFunctional/parallel/InternationalLanguage 0.17
78 TestFunctional/parallel/StatusCmd 0.93
81 TestFunctional/parallel/ServiceCmd 37.82
82 TestFunctional/parallel/AddonsCmd 0.16
83 TestFunctional/parallel/PersistentVolumeClaim 69.64
85 TestFunctional/parallel/SSHCmd 0.61
86 TestFunctional/parallel/CpCmd 0.6
87 TestFunctional/parallel/MySQL 34.69
88 TestFunctional/parallel/FileSync 0.28
89 TestFunctional/parallel/CertSync 1.71
93 TestFunctional/parallel/NodeLabels 0.08
94 TestFunctional/parallel/LoadImage 3.39
95 TestFunctional/parallel/RemoveImage 3.77
96 TestFunctional/parallel/LoadImageFromFile 2.63
97 TestFunctional/parallel/BuildImage 6.53
98 TestFunctional/parallel/ListImages 0.52
99 TestFunctional/parallel/NonActiveRuntimeDisabled 0.54
101 TestFunctional/parallel/Version/short 0.06
102 TestFunctional/parallel/Version/components 1
103 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
104 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
105 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
107 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
109 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
110 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
114 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
115 TestFunctional/parallel/ProfileCmd/profile_not_create 0.34
116 TestFunctional/parallel/ProfileCmd/profile_list 0.3
117 TestFunctional/parallel/ProfileCmd/profile_json_output 0.3
118 TestFunctional/parallel/MountCmd/any-port 11.01
119 TestFunctional/parallel/MountCmd/specific-port 1.88
120 TestFunctional/delete_busybox_image 0.09
121 TestFunctional/delete_my-image_image 0.04
122 TestFunctional/delete_minikube_cached_images 0.04
126 TestJSONOutput/start/Audit 0
128 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
129 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
131 TestJSONOutput/pause/Audit 0
133 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
134 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
136 TestJSONOutput/unpause/Audit 0
138 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
139 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
141 TestJSONOutput/stop/Audit 0
143 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
144 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
145 TestErrorJSONOutput 0.33
148 TestMainNoArgs 0.05
151 TestMultiNode/serial/FreshStart2Nodes 127.73
154 TestMultiNode/serial/AddNode 48.13
155 TestMultiNode/serial/ProfileList 0.23
156 TestMultiNode/serial/CopyFile 1.84
157 TestMultiNode/serial/StopNode 2.93
158 TestMultiNode/serial/StartAfterStop 50.28
159 TestMultiNode/serial/RestartKeepsNodes 178.06
160 TestMultiNode/serial/DeleteNode 1.86
161 TestMultiNode/serial/StopMultiNode 5.29
162 TestMultiNode/serial/RestartMultiNode 117.43
163 TestMultiNode/serial/ValidateNameConflict 60.37
169 TestDebPackageInstall/install_amd64_debian:sid/minikube 0
170 TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver 11.07
172 TestDebPackageInstall/install_amd64_debian:latest/minikube 0
173 TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver 9.8
175 TestDebPackageInstall/install_amd64_debian:10/minikube 0
176 TestDebPackageInstall/install_amd64_debian:10/kvm2-driver 9.94
178 TestDebPackageInstall/install_amd64_debian:9/minikube 0
179 TestDebPackageInstall/install_amd64_debian:9/kvm2-driver 8.14
181 TestDebPackageInstall/install_amd64_ubuntu:latest/minikube 0
182 TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver 16.88
184 TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube 0
185 TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver 16.5
187 TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube 0
188 TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver 16.87
190 TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube 0
191 TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver 15.35
194 TestScheduledStopUnix 92.76
198 TestRunningBinaryUpgrade 278.63
200 TestKubernetesUpgrade 219.47
203 TestPause/serial/Start 108.92
204 TestPause/serial/SecondStartNoReconfiguration 18.38
205 TestPause/serial/Pause 1.97
206 TestPause/serial/VerifyStatus 0.32
207 TestPause/serial/Unpause 3.37
208 TestPause/serial/PauseAgain 5.83
209 TestPause/serial/DeletePaused 0.98
210 TestPause/serial/VerifyDeletedResources 0.23
218 TestNetworkPlugins/group/false 0.88
229 TestNetworkPlugins/group/auto/Start 87.31
230 TestNetworkPlugins/group/kindnet/Start 120.59
231 TestNetworkPlugins/group/auto/KubeletFlags 0.27
232 TestNetworkPlugins/group/auto/NetCatPod 14.29
233 TestNetworkPlugins/group/auto/DNS 0.32
234 TestNetworkPlugins/group/auto/Localhost 0.29
235 TestNetworkPlugins/group/auto/HairPin 0.27
236 TestNetworkPlugins/group/cilium/Start 195.73
237 TestStoppedBinaryUpgrade/MinikubeLogs 1.25
239 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
240 TestNetworkPlugins/group/kindnet/KubeletFlags 0.25
241 TestNetworkPlugins/group/kindnet/NetCatPod 13.7
242 TestNetworkPlugins/group/kindnet/DNS 0.3
243 TestNetworkPlugins/group/kindnet/Localhost 0.28
244 TestNetworkPlugins/group/kindnet/HairPin 0.3
245 TestNetworkPlugins/group/custom-weave/Start 101.2
246 TestNetworkPlugins/group/enable-default-cni/Start 103.89
247 TestNetworkPlugins/group/flannel/Start 93.86
248 TestNetworkPlugins/group/custom-weave/KubeletFlags 0.28
249 TestNetworkPlugins/group/custom-weave/NetCatPod 19.98
250 TestNetworkPlugins/group/cilium/ControllerPod 5.04
251 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.25
252 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.56
253 TestNetworkPlugins/group/cilium/KubeletFlags 0.24
254 TestNetworkPlugins/group/cilium/NetCatPod 15.71
255 TestNetworkPlugins/group/bridge/Start 96.43
256 TestNetworkPlugins/group/enable-default-cni/DNS 0.3
257 TestNetworkPlugins/group/enable-default-cni/Localhost 0.36
258 TestNetworkPlugins/group/enable-default-cni/HairPin 0.33
260 TestStartStop/group/old-k8s-version/serial/FirstStart 192.81
261 TestNetworkPlugins/group/cilium/DNS 0.41
262 TestNetworkPlugins/group/cilium/Localhost 0.27
263 TestNetworkPlugins/group/cilium/HairPin 0.27
265 TestStartStop/group/no-preload/serial/FirstStart 167.85
266 TestNetworkPlugins/group/flannel/ControllerPod 8.03
267 TestNetworkPlugins/group/flannel/KubeletFlags 0.47
268 TestNetworkPlugins/group/flannel/NetCatPod 12.82
269 TestNetworkPlugins/group/flannel/DNS 0.5
270 TestNetworkPlugins/group/flannel/Localhost 0.34
271 TestNetworkPlugins/group/flannel/HairPin 0.35
273 TestStartStop/group/default-k8s-different-port/serial/FirstStart 97
274 TestNetworkPlugins/group/bridge/KubeletFlags 0.23
275 TestNetworkPlugins/group/bridge/NetCatPod 15.07
276 TestNetworkPlugins/group/bridge/DNS 2.52
277 TestNetworkPlugins/group/bridge/Localhost 0.24
278 TestNetworkPlugins/group/bridge/HairPin 0.27
280 TestStartStop/group/newest-cni/serial/FirstStart 92.52
281 TestStartStop/group/default-k8s-different-port/serial/DeployApp 11.76
282 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 1.18
283 TestStartStop/group/default-k8s-different-port/serial/Stop 68.4
284 TestStartStop/group/no-preload/serial/DeployApp 11.71
285 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.16
286 TestStartStop/group/no-preload/serial/Stop 3.11
287 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.16
288 TestStartStop/group/no-preload/serial/SecondStart 348.64
289 TestStartStop/group/old-k8s-version/serial/DeployApp 11.66
290 TestStartStop/group/newest-cni/serial/DeployApp 0
291 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.83
292 TestStartStop/group/newest-cni/serial/Stop 4.13
293 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.17
294 TestStartStop/group/old-k8s-version/serial/Stop 3.15
295 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
296 TestStartStop/group/newest-cni/serial/SecondStart 86.95
297 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
298 TestStartStop/group/old-k8s-version/serial/SecondStart 462.9
299 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.19
300 TestStartStop/group/default-k8s-different-port/serial/SecondStart 391.55
301 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
302 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
303 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.37
304 TestStartStop/group/newest-cni/serial/Pause 3.02
306 TestStartStop/group/embed-certs/serial/FirstStart 87
307 TestStartStop/group/embed-certs/serial/DeployApp 12.64
308 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.11
309 TestStartStop/group/embed-certs/serial/Stop 63.43
310 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.16
311 TestStartStop/group/embed-certs/serial/SecondStart 380.05
312 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.02
313 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
314 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
315 TestStartStop/group/no-preload/serial/Pause 2.98
316 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 5.02
317 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 5.11
318 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 0.27
319 TestStartStop/group/default-k8s-different-port/serial/Pause 2.66
320 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
321 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
322 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
323 TestStartStop/group/old-k8s-version/serial/Pause 2.7
324 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.02
325 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
326 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
327 TestStartStop/group/embed-certs/serial/Pause 2.64
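The header reports 230 of 263 tests passed and 5 failed; the remainder were neither run to completion nor failed (i.e., skipped). A quick sketch of that arithmetic, using only the counts stated in this report:

```python
# Counts taken from this report's header: 263 tests total, 230 passed, 5 failed.
total, passed, failed = 263, 230, 5

skipped = total - passed - failed   # tests that neither passed nor failed
pass_rate = passed / total          # fraction of all tests that passed

print(skipped)                      # 28
print(f"{pass_rate:.1%}")           # 87.5%
```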
TestDownloadOnly/v1.14.0/json-events (6.5s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210810221716-30291 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210810221716-30291 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (6.495261543s)
--- PASS: TestDownloadOnly/v1.14.0/json-events (6.50s)
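The json-events tests drive `minikube start -o=json`, which emits one JSON object per line describing progress steps; the later TestJSONOutput checks (DistinctCurrentSteps, IncreasingCurrentSteps) assert properties of that step sequence. A minimal consumer for such a stream might look like this — the sample lines below are illustrative, not taken from this run:

```python
import json

# Illustrative JSON-lines progress output (shape based on minikube's
# CloudEvents-style step events; NOT captured from this test run).
sample = """\
{"type": "io.k8s.sigs.minikube.step", "data": {"currentstep": "0", "name": "Initial Minikube Setup"}}
{"type": "io.k8s.sigs.minikube.step", "data": {"currentstep": "1", "name": "Selecting Driver"}}
"""

def parse_events(text: str) -> list[dict]:
    """Parse one JSON object per non-empty line into a list of dicts."""
    return [json.loads(line) for line in text.splitlines() if line.strip()]

events = parse_events(sample)
steps = [int(e["data"]["currentstep"]) for e in events]

# Properties in the spirit of DistinctCurrentSteps / IncreasingCurrentSteps:
# step numbers are unique and monotonically increasing.
assert len(set(steps)) == len(steps)
assert steps == sorted(steps)
```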

                                                
                                    
TestDownloadOnly/v1.14.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/preload-exists
--- PASS: TestDownloadOnly/v1.14.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.14.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20210810221716-30291
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20210810221716-30291: exit status 85 (67.903002ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/10 22:17:16
	Running on machine: debian-jenkins-agent-3
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0810 22:17:16.168806   30303 out.go:298] Setting OutFile to fd 1 ...
	I0810 22:17:16.168878   30303 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:17:16.168882   30303 out.go:311] Setting ErrFile to fd 2...
	I0810 22:17:16.168885   30303 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:17:16.168990   30303 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin
	W0810 22:17:16.169110   30303 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/config/config.json: no such file or directory
	I0810 22:17:16.169348   30303 out.go:305] Setting JSON to true
	I0810 22:17:16.204650   30303 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-3","uptime":7196,"bootTime":1628626640,"procs":158,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0810 22:17:16.204745   30303 start.go:121] virtualization: kvm guest
	I0810 22:17:16.207939   30303 notify.go:169] Checking for updates...
	I0810 22:17:16.209885   30303 driver.go:335] Setting default libvirt URI to qemu:///system
	I0810 22:17:16.240434   30303 start.go:278] selected driver: kvm2
	I0810 22:17:16.240457   30303 start.go:751] validating driver "kvm2" against <nil>
	I0810 22:17:16.241276   30303 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:17:16.241440   30303 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0810 22:17:16.252557   30303 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0810 22:17:16.252607   30303 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0810 22:17:16.253029   30303 start_flags.go:344] Using suggested 6000MB memory alloc based on sys=32179MB, container=0MB
	I0810 22:17:16.253155   30303 start_flags.go:679] Wait components to verify : map[apiserver:true system_pods:true]
	I0810 22:17:16.253193   30303 cni.go:93] Creating CNI manager for ""
	I0810 22:17:16.253280   30303 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0810 22:17:16.253287   30303 start_flags.go:272] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0810 22:17:16.253296   30303 start_flags.go:277] config:
	{Name:download-only-20210810221716-30291 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20210810221716-30291 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRI
Socket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0810 22:17:16.253454   30303 iso.go:123] acquiring lock: {Name:mke8829815ca14456120fefc524d0a056bf82da0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:17:16.255460   30303 download.go:92] Downloading: https://storage.googleapis.com/minikube-builds/iso/12122/minikube-v1.22.0-1628238775-12122.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/12122/minikube-v1.22.0-1628238775-12122.iso.sha256 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/iso/minikube-v1.22.0-1628238775-12122.iso
	I0810 22:17:17.998598   30303 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime crio
	I0810 22:17:18.069497   30303 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-amd64.tar.lz4
	I0810 22:17:18.069545   30303 cache.go:56] Caching tarball of preloaded images
	I0810 22:17:18.069753   30303 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime crio
	I0810 22:17:18.071932   30303 preload.go:237] getting checksum for preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-amd64.tar.lz4 ...
	I0810 22:17:18.138890   30303 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:70b8731eaaa1b4de2d1cd60021fc1260 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-amd64.tar.lz4
	I0810 22:17:20.925841   30303 preload.go:247] saving checksum for preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-amd64.tar.lz4 ...
	I0810 22:17:20.925939   30303 preload.go:254] verifying checksumm of /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210810221716-30291"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.14.0/LogsDuration (0.07s)
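The log above downloads the preload tarball with a `checksum=md5:...` query parameter and then saves and verifies the checksum locally. The verification step amounts to something like the following — a sketch, not minikube's actual implementation; the payload here is illustrative:

```python
import hashlib

def md5_matches(data: bytes, expected_hex: str) -> bool:
    """Return True if the MD5 digest of data equals the expected hex string."""
    return hashlib.md5(data).hexdigest() == expected_hex.lower()

# Illustrative check on a small payload (not the real tarball bytes).
payload = b"preload-tarball-bytes"
digest = hashlib.md5(payload).hexdigest()

assert md5_matches(payload, digest)            # intact download verifies
assert not md5_matches(payload + b"x", digest) # corrupted download does not
```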

                                                
                                    
TestDownloadOnly/v1.21.3/json-events (6.65s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.21.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210810221716-30291 --force --alsologtostderr --kubernetes-version=v1.21.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210810221716-30291 --force --alsologtostderr --kubernetes-version=v1.21.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (6.64512511s)
--- PASS: TestDownloadOnly/v1.21.3/json-events (6.65s)

                                                
                                    
TestDownloadOnly/v1.21.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.21.3/preload-exists
--- PASS: TestDownloadOnly/v1.21.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.21.3/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.21.3/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20210810221716-30291
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20210810221716-30291: exit status 85 (65.79293ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/10 22:17:22
	Running on machine: debian-jenkins-agent-3
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0810 22:17:22.733995   30339 out.go:298] Setting OutFile to fd 1 ...
	I0810 22:17:22.734069   30339 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:17:22.734073   30339 out.go:311] Setting ErrFile to fd 2...
	I0810 22:17:22.734075   30339 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:17:22.734169   30339 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin
	W0810 22:17:22.734271   30339 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/config/config.json: no such file or directory
	I0810 22:17:22.734367   30339 out.go:305] Setting JSON to true
	I0810 22:17:22.769186   30339 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-3","uptime":7203,"bootTime":1628626640,"procs":158,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0810 22:17:22.769299   30339 start.go:121] virtualization: kvm guest
	I0810 22:17:22.772080   30339 notify.go:169] Checking for updates...
	W0810 22:17:22.774668   30339 start.go:659] api.Load failed for download-only-20210810221716-30291: filestore "download-only-20210810221716-30291": Docker machine "download-only-20210810221716-30291" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0810 22:17:22.774722   30339 driver.go:335] Setting default libvirt URI to qemu:///system
	W0810 22:17:22.774774   30339 start.go:659] api.Load failed for download-only-20210810221716-30291: filestore "download-only-20210810221716-30291": Docker machine "download-only-20210810221716-30291" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0810 22:17:22.804704   30339 start.go:278] selected driver: kvm2
	I0810 22:17:22.804728   30339 start.go:751] validating driver "kvm2" against &{Name:download-only-20210810221716-30291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12122/minikube-v1.22.0-1628238775-12122.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 C
lusterName:download-only-20210810221716-30291 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0810 22:17:22.805614   30339 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:17:22.805820   30339 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0810 22:17:22.816604   30339 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0810 22:17:22.817294   30339 cni.go:93] Creating CNI manager for ""
	I0810 22:17:22.817309   30339 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0810 22:17:22.817336   30339 start_flags.go:277] config:
	{Name:download-only-20210810221716-30291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12122/minikube-v1.22.0-1628238775-12122.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:download-only-20210810221716-30291 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0810 22:17:22.817456   30339 iso.go:123] acquiring lock: {Name:mke8829815ca14456120fefc524d0a056bf82da0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:17:22.819377   30339 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0810 22:17:22.908849   30339 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0810 22:17:22.908888   30339 cache.go:56] Caching tarball of preloaded images
	I0810 22:17:22.909152   30339 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0810 22:17:22.911542   30339 preload.go:237] getting checksum for preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 ...
	I0810 22:17:22.987206   30339 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:5b844d0f443dc130a4f324a367701516 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0810 22:17:27.387892   30339 preload.go:247] saving checksum for preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 ...
	I0810 22:17:27.387985   30339 preload.go:254] verifying checksumm of /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210810221716-30291"

-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.21.3/LogsDuration (0.07s)
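The download step in the log above fetches the preload tarball with a `?checksum=md5:...` query and then saves and verifies that checksum on disk. A minimal sketch of the verify step, under the assumption that it amounts to an md5 comparison (`verify_preload` is our own helper name, not minikube code):

```shell
#!/usr/bin/env bash
# Compare a file's md5 against the expected hash carried in the
# download URL's ?checksum=md5:... query parameter.
verify_preload() {
  local tarball="$1" expected="$2" actual
  actual=$(md5sum "$tarball" | awk '{print $1}')
  if [ "$actual" = "$expected" ]; then
    echo "checksum OK"
  else
    echo "checksum mismatch: got $actual, want $expected" >&2
    return 1
  fi
}

# Demo with an empty file, whose md5 is the well-known empty-input digest.
tmp=$(mktemp)
verify_preload "$tmp" "d41d8cd98f00b204e9800998ecf8e427"
rm -f "$tmp"
```

On mismatch the helper returns non-zero, which is what lets a caller fall back to re-downloading the tarball.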

TestDownloadOnly/v1.22.0-rc.0/json-events (6.38s)

=== RUN   TestDownloadOnly/v1.22.0-rc.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210810221716-30291 --force --alsologtostderr --kubernetes-version=v1.22.0-rc.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210810221716-30291 --force --alsologtostderr --kubernetes-version=v1.22.0-rc.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (6.379521904s)
--- PASS: TestDownloadOnly/v1.22.0-rc.0/json-events (6.38s)

TestDownloadOnly/v1.22.0-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.22.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.22.0-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.22.0-rc.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.22.0-rc.0/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20210810221716-30291
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20210810221716-30291: exit status 85 (65.36737ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/10 22:17:29
	Running on machine: debian-jenkins-agent-3
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0810 22:17:29.444533   30375 out.go:298] Setting OutFile to fd 1 ...
	I0810 22:17:29.444609   30375 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:17:29.444613   30375 out.go:311] Setting ErrFile to fd 2...
	I0810 22:17:29.444616   30375 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:17:29.444727   30375 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin
	W0810 22:17:29.444837   30375 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/config/config.json: no such file or directory
	I0810 22:17:29.444936   30375 out.go:305] Setting JSON to true
	I0810 22:17:29.479960   30375 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-3","uptime":7209,"bootTime":1628626640,"procs":158,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0810 22:17:29.480067   30375 start.go:121] virtualization: kvm guest
	I0810 22:17:29.482698   30375 notify.go:169] Checking for updates...
	W0810 22:17:29.484973   30375 start.go:659] api.Load failed for download-only-20210810221716-30291: filestore "download-only-20210810221716-30291": Docker machine "download-only-20210810221716-30291" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0810 22:17:29.485024   30375 driver.go:335] Setting default libvirt URI to qemu:///system
	W0810 22:17:29.485054   30375 start.go:659] api.Load failed for download-only-20210810221716-30291: filestore "download-only-20210810221716-30291": Docker machine "download-only-20210810221716-30291" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0810 22:17:29.518039   30375 start.go:278] selected driver: kvm2
	I0810 22:17:29.518056   30375 start.go:751] validating driver "kvm2" against &{Name:download-only-20210810221716-30291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12122/minikube-v1.22.0-1628238775-12122.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 C
lusterName:download-only-20210810221716-30291 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0810 22:17:29.518813   30375 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:17:29.519022   30375 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0810 22:17:29.530285   30375 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0810 22:17:29.530934   30375 cni.go:93] Creating CNI manager for ""
	I0810 22:17:29.530949   30375 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0810 22:17:29.530956   30375 start_flags.go:277] config:
	{Name:download-only-20210810221716-30291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12122/minikube-v1.22.0-1628238775-12122.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:download-only-20210810221716-30291 Namespace:default APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0810 22:17:29.531086   30375 iso.go:123] acquiring lock: {Name:mke8829815ca14456120fefc524d0a056bf82da0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0810 22:17:29.533003   30375 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0810 22:17:29.604273   30375 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0810 22:17:29.604309   30375 cache.go:56] Caching tarball of preloaded images
	I0810 22:17:29.604472   30375 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0810 22:17:29.606649   30375 preload.go:237] getting checksum for preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	I0810 22:17:29.675444   30375 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:c7902b63f7bbc786f5f337da25a17477 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0810 22:17:34.170715   30375 preload.go:247] saving checksum for preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	I0810 22:17:34.170805   30375 preload.go:254] verifying checksumm of /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210810221716-30291"

-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.22.0-rc.0/LogsDuration (0.07s)
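The "Last Start" dump above states its klog format explicitly: `[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg`. A quick field-extraction sketch against one of the lines shown (the variable names are ours, chosen for illustration):

```shell
#!/usr/bin/env bash
# Pull severity, timestamp, and source location out of a klog line,
# per the "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg" format.
line='I0810 22:17:29.444533   30375 out.go:298] Setting OutFile to fd 1 ...'
severity=${line:0:1}                                  # I, W, E, or F
stamp=$(echo "$line" | awk '{print $2}')              # hh:mm:ss.uuuuuu
loc=$(echo "$line" | awk '{print $4}' | tr -d ']')    # file:line
echo "$severity $stamp $loc"                          # -> I 22:17:29.444533 out.go:298
```

Filtering a failed run's log for `^[WEF]` severities is often the fastest way to find the first interesting line.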

TestDownloadOnly/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:189: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.24s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:201: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-20210810221716-30291
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.23s)

TestOffline (98.97s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline

aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-20210810225245-30291 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-20210810225245-30291 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m37.523863652s)
helpers_test.go:176: Cleaning up "offline-crio-20210810225245-30291" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-20210810225245-30291
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-20210810225245-30291: (1.448889991s)
--- PASS: TestOffline (98.97s)

TestAddons/parallel/Registry (26.42s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry

addons_test.go:284: registry stabilized in 17.834074ms

=== CONT  TestAddons/parallel/Registry
addons_test.go:286: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/Registry
helpers_test.go:340: "registry-bzgg5" [a35be290-e994-4568-af3f-633135f23d51] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:286: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.017706318s

=== CONT  TestAddons/parallel/Registry
addons_test.go:289: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:340: "registry-proxy-msxjj" [e874a500-5631-4f32-a4f5-0b41e6ba7964] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:289: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.016260609s
addons_test.go:294: (dbg) Run:  kubectl --context addons-20210810221736-30291 delete po -l run=registry-test --now
addons_test.go:299: (dbg) Run:  kubectl --context addons-20210810221736-30291 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

=== CONT  TestAddons/parallel/Registry
addons_test.go:299: (dbg) Done: kubectl --context addons-20210810221736-30291 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (15.48476403s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210810221736-30291 ip
2021/08/10 22:21:38 [DEBUG] GET http://192.168.50.30:5000
addons_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210810221736-30291 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (26.42s)

TestAddons/parallel/MetricsServer (6.4s)
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:361: metrics-server stabilized in 2.617745ms
addons_test.go:363: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:340: "metrics-server-77c99ccb96-dr6pm" [a3519db9-c366-43f2-bc64-14fa8206ceee] Running

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:363: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.013158314s
addons_test.go:369: (dbg) Run:  kubectl --context addons-20210810221736-30291 top pods -n kube-system
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210810221736-30291 addons disable metrics-server --alsologtostderr -v=1

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:386: (dbg) Done: out/minikube-linux-amd64 -p addons-20210810221736-30291 addons disable metrics-server --alsologtostderr -v=1: (1.287170319s)
--- PASS: TestAddons/parallel/MetricsServer (6.40s)

TestAddons/parallel/HelmTiller (12.68s)
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller

addons_test.go:410: tiller-deploy stabilized in 16.01605ms

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:412: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/HelmTiller
helpers_test.go:340: "tiller-deploy-768d69497-5cmnh" [3e7a3e5d-ad13-4423-8a48-5780f711aabf] Running

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:412: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.0207994s
addons_test.go:427: (dbg) Run:  kubectl --context addons-20210810221736-30291 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:427: (dbg) Done: kubectl --context addons-20210810221736-30291 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: (7.028243334s)
addons_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210810221736-30291 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.68s)

TestAddons/parallel/Olm (63.2s)
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:463: catalog-operator stabilized in 2.454116ms
addons_test.go:467: olm-operator stabilized in 5.029739ms
addons_test.go:471: packageserver stabilized in 7.565099ms
addons_test.go:473: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=catalog-operator" in namespace "olm" ...
helpers_test.go:340: "catalog-operator-75d496484d-96kck" [b26ca429-82c9-4e29-a1e4-e199c1594830] Running

=== CONT  TestAddons/parallel/Olm
addons_test.go:473: (dbg) TestAddons/parallel/Olm: app=catalog-operator healthy within 5.014754267s
addons_test.go:476: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=olm-operator" in namespace "olm" ...
helpers_test.go:340: "olm-operator-859c88c96-94cpj" [f10aa085-7448-40e8-870c-2035b6e406c0] Running

=== CONT  TestAddons/parallel/Olm
addons_test.go:476: (dbg) TestAddons/parallel/Olm: app=olm-operator healthy within 5.01185923s
addons_test.go:479: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=packageserver" in namespace "olm" ...
helpers_test.go:340: "packageserver-677ff7d94-6q8mg" [26b191d6-0f8d-4e25-8b1a-7d28d0e7984e] Running
helpers_test.go:340: "packageserver-677ff7d94-xkd8s" [786954a1-e70a-4b95-a9e5-acbfe8a49bf6] Running
helpers_test.go:340: "packageserver-677ff7d94-6q8mg" [26b191d6-0f8d-4e25-8b1a-7d28d0e7984e] Running
helpers_test.go:340: "packageserver-677ff7d94-xkd8s" [786954a1-e70a-4b95-a9e5-acbfe8a49bf6] Running
helpers_test.go:340: "packageserver-677ff7d94-6q8mg" [26b191d6-0f8d-4e25-8b1a-7d28d0e7984e] Running
helpers_test.go:340: "packageserver-677ff7d94-xkd8s" [786954a1-e70a-4b95-a9e5-acbfe8a49bf6] Running

=== CONT  TestAddons/parallel/Olm
helpers_test.go:340: "packageserver-677ff7d94-6q8mg" [26b191d6-0f8d-4e25-8b1a-7d28d0e7984e] Running
helpers_test.go:340: "packageserver-677ff7d94-xkd8s" [786954a1-e70a-4b95-a9e5-acbfe8a49bf6] Running

=== CONT  TestAddons/parallel/Olm
helpers_test.go:340: "packageserver-677ff7d94-6q8mg" [26b191d6-0f8d-4e25-8b1a-7d28d0e7984e] Running
helpers_test.go:340: "packageserver-677ff7d94-xkd8s" [786954a1-e70a-4b95-a9e5-acbfe8a49bf6] Running

=== CONT  TestAddons/parallel/Olm
helpers_test.go:340: "packageserver-677ff7d94-6q8mg" [26b191d6-0f8d-4e25-8b1a-7d28d0e7984e] Running
addons_test.go:479: (dbg) TestAddons/parallel/Olm: app=packageserver healthy within 5.010308898s
addons_test.go:482: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "olm.catalogSource=operatorhubio-catalog" in namespace "olm" ...
helpers_test.go:340: "operatorhubio-catalog-dxpzx" [9eb587c8-1ac0-4c31-ad75-392aeeab016c] Running

=== CONT  TestAddons/parallel/Olm
addons_test.go:482: (dbg) TestAddons/parallel/Olm: olm.catalogSource=operatorhubio-catalog healthy within 5.009535215s
addons_test.go:487: (dbg) Run:  kubectl --context addons-20210810221736-30291 create -f testdata/etcd.yaml

=== CONT  TestAddons/parallel/Olm
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210810221736-30291 get csv -n my-etcd
addons_test.go:499: kubectl --context addons-20210810221736-30291 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.

=== CONT  TestAddons/parallel/Olm
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210810221736-30291 get csv -n my-etcd
addons_test.go:499: kubectl --context addons-20210810221736-30291 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.

=== CONT  TestAddons/parallel/Olm
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210810221736-30291 get csv -n my-etcd
addons_test.go:499: kubectl --context addons-20210810221736-30291 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.

=== CONT  TestAddons/parallel/Olm
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210810221736-30291 get csv -n my-etcd
addons_test.go:499: kubectl --context addons-20210810221736-30291 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.

=== CONT  TestAddons/parallel/Olm
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210810221736-30291 get csv -n my-etcd
addons_test.go:499: kubectl --context addons-20210810221736-30291 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.

=== CONT  TestAddons/parallel/Olm
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210810221736-30291 get csv -n my-etcd
--- PASS: TestAddons/parallel/Olm (63.20s)
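The repeated `get csv -n my-etcd` runs above are a poll-until-ready loop: the test keeps querying until a ClusterServiceVersion appears. The same pattern in miniature (`retry_until` is our own helper, not part of minikube or its test suite):

```shell
#!/usr/bin/env bash
# Retry a command up to N times, sleeping between attempts, the way the
# Olm test repeats "kubectl get csv -n my-etcd" until it succeeds.
retry_until() {
  local attempts="$1"; shift
  local i
  for ((i = 1; i <= attempts; i++)); do
    "$@" && return 0
    sleep 1
  done
  return 1
}

# Demo: a command that succeeds immediately needs only one attempt.
retry_until 3 true && echo "ready"
```

In the real test the retried command would be the `kubectl get csv` invocation, with the attempt budget derived from the test's 6m0s wait.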

TestAddons/parallel/CSI (85.11s)
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI

addons_test.go:526: csi-hostpath-driver pods stabilized in 24.108688ms
addons_test.go:529: (dbg) Run:  kubectl --context addons-20210810221736-30291 create -f testdata/csi-hostpath-driver/pvc.yaml

=== CONT  TestAddons/parallel/CSI
addons_test.go:534: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:390: (dbg) Run:  kubectl --context addons-20210810221736-30291 get pvc hpvc -o jsonpath={.status.phase} -n default

=== CONT  TestAddons/parallel/CSI
helpers_test.go:390: (dbg) Run:  kubectl --context addons-20210810221736-30291 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:539: (dbg) Run:  kubectl --context addons-20210810221736-30291 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:544: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:340: "task-pv-pod" [fe05df08-588f-4343-b4f6-a372fdb18081] Pending
helpers_test.go:340: "task-pv-pod" [fe05df08-588f-4343-b4f6-a372fdb18081] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:340: "task-pv-pod" [fe05df08-588f-4343-b4f6-a372fdb18081] Running

=== CONT  TestAddons/parallel/CSI
addons_test.go:544: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 25.012720218s
addons_test.go:549: (dbg) Run:  kubectl --context addons-20210810221736-30291 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:554: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:415: (dbg) Run:  kubectl --context addons-20210810221736-30291 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:423: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 

=== CONT  TestAddons/parallel/CSI
helpers_test.go:415: (dbg) Run:  kubectl --context addons-20210810221736-30291 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:559: (dbg) Run:  kubectl --context addons-20210810221736-30291 delete pod task-pv-pod
=== CONT  TestAddons/parallel/CSI
addons_test.go:559: (dbg) Done: kubectl --context addons-20210810221736-30291 delete pod task-pv-pod: (8.525170918s)
addons_test.go:565: (dbg) Run:  kubectl --context addons-20210810221736-30291 delete pvc hpvc
addons_test.go:571: (dbg) Run:  kubectl --context addons-20210810221736-30291 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:576: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:390: (dbg) Run:  kubectl --context addons-20210810221736-30291 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:390: (dbg) Run:  kubectl --context addons-20210810221736-30291 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:581: (dbg) Run:  kubectl --context addons-20210810221736-30291 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:586: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:340: "task-pv-pod-restore" [5d3179f5-7f4e-41a5-aa13-2cf2c3591969] Pending
helpers_test.go:340: "task-pv-pod-restore" [5d3179f5-7f4e-41a5-aa13-2cf2c3591969] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
=== CONT  TestAddons/parallel/CSI
helpers_test.go:340: "task-pv-pod-restore" [5d3179f5-7f4e-41a5-aa13-2cf2c3591969] Running
=== CONT  TestAddons/parallel/CSI
addons_test.go:586: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 27.018231933s
addons_test.go:591: (dbg) Run:  kubectl --context addons-20210810221736-30291 delete pod task-pv-pod-restore
=== CONT  TestAddons/parallel/CSI
addons_test.go:591: (dbg) Done: kubectl --context addons-20210810221736-30291 delete pod task-pv-pod-restore: (11.12289458s)
addons_test.go:595: (dbg) Run:  kubectl --context addons-20210810221736-30291 delete pvc hpvc-restore
addons_test.go:599: (dbg) Run:  kubectl --context addons-20210810221736-30291 delete volumesnapshot new-snapshot-demo
addons_test.go:603: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210810221736-30291 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:603: (dbg) Done: out/minikube-linux-amd64 -p addons-20210810221736-30291 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.234055558s)
addons_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210810221736-30291 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (85.11s)

TestAddons/parallel/GCPAuth (91.43s)
=== RUN   TestAddons/parallel/GCPAuth
=== PAUSE TestAddons/parallel/GCPAuth
=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:618: (dbg) Run:  kubectl --context addons-20210810221736-30291 create -f testdata/busybox.yaml
=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:624: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:340: "busybox" [b283326c-6afe-4ed1-ba90-188232830913] Pending
helpers_test.go:340: "busybox" [b283326c-6afe-4ed1-ba90-188232830913] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:340: "busybox" [b283326c-6afe-4ed1-ba90-188232830913] Running
=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:624: (dbg) TestAddons/parallel/GCPAuth: integration-test=busybox healthy within 12.017504439s
addons_test.go:630: (dbg) Run:  kubectl --context addons-20210810221736-30291 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:667: (dbg) Run:  kubectl --context addons-20210810221736-30291 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:683: (dbg) Run:  kubectl --context addons-20210810221736-30291 apply -f testdata/private-image.yaml
addons_test.go:690: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image" in namespace "default" ...
helpers_test.go:340: "private-image-7ff9c8c74f-9v7sz" [f58dc798-47b6-4645-9f6d-71c0d033c8ef] Pending / Ready:ContainersNotReady (containers with unready status: [private-image]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image])
=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:340: "private-image-7ff9c8c74f-9v7sz" [f58dc798-47b6-4645-9f6d-71c0d033c8ef] Running
=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:690: (dbg) TestAddons/parallel/GCPAuth: integration-test=private-image healthy within 33.015861801s
addons_test.go:696: (dbg) Run:  kubectl --context addons-20210810221736-30291 apply -f testdata/private-image-eu.yaml
=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:703: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image-eu" in namespace "default" ...
=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:340: "private-image-eu-5956d58f9f-g6tz4" [9114b208-93a1-4ef6-a7cc-b77ef5bb5565] Pending / Ready:ContainersNotReady (containers with unready status: [private-image-eu]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image-eu])
=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:340: "private-image-eu-5956d58f9f-g6tz4" [9114b208-93a1-4ef6-a7cc-b77ef5bb5565] Running
=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:703: (dbg) TestAddons/parallel/GCPAuth: integration-test=private-image-eu healthy within 16.017204293s
addons_test.go:709: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210810221736-30291 addons disable gcp-auth --alsologtostderr -v=1
=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:709: (dbg) Done: out/minikube-linux-amd64 -p addons-20210810221736-30291 addons disable gcp-auth --alsologtostderr -v=1: (28.757800771s)
--- PASS: TestAddons/parallel/GCPAuth (91.43s)

TestCertOptions (74.79s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-20210810225529-30291 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
=== CONT  TestCertOptions
cert_options_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-20210810225529-30291 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m13.504954119s)
cert_options_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-20210810225529-30291 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:73: (dbg) Run:  kubectl --context cert-options-20210810225529-30291 config view
helpers_test.go:176: Cleaning up "cert-options-20210810225529-30291" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-20210810225529-30291
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-20210810225529-30291: (1.015255251s)
--- PASS: TestCertOptions (74.79s)

TestForceSystemdFlag (71.56s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-20210810225510-30291 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-20210810225510-30291 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m10.251498357s)
helpers_test.go:176: Cleaning up "force-systemd-flag-20210810225510-30291" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-20210810225510-30291
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-20210810225510-30291: (1.304609621s)
--- PASS: TestForceSystemdFlag (71.56s)

TestForceSystemdEnv (65.48s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-20210810225424-30291 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
=== CONT  TestForceSystemdEnv
docker_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-20210810225424-30291 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m4.476040807s)
helpers_test.go:176: Cleaning up "force-systemd-env-20210810225424-30291" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-20210810225424-30291
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-20210810225424-30291: (1.00284648s)
--- PASS: TestForceSystemdEnv (65.48s)

TestKVMDriverInstallOrUpdate (4.14s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.14s)

TestErrorSpam/setup (56.71s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:78: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210810222557-30291 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20210810222557-30291 --driver=kvm2  --container-runtime=crio
E0810 22:26:13.066392   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/client.crt: no such file or directory
E0810 22:26:13.072022   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/client.crt: no such file or directory
E0810 22:26:13.082227   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/client.crt: no such file or directory
E0810 22:26:13.102453   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/client.crt: no such file or directory
E0810 22:26:13.142699   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/client.crt: no such file or directory
E0810 22:26:13.222989   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/client.crt: no such file or directory
E0810 22:26:13.383385   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/client.crt: no such file or directory
E0810 22:26:13.703988   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/client.crt: no such file or directory
E0810 22:26:14.344921   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/client.crt: no such file or directory
E0810 22:26:15.625401   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/client.crt: no such file or directory
E0810 22:26:18.187335   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/client.crt: no such file or directory
E0810 22:26:23.307691   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/client.crt: no such file or directory
E0810 22:26:33.548654   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/client.crt: no such file or directory
E0810 22:26:54.028831   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/client.crt: no such file or directory
error_spam_test.go:78: (dbg) Done: out/minikube-linux-amd64 start -p nospam-20210810222557-30291 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20210810222557-30291 --driver=kvm2  --container-runtime=crio: (56.706098753s)
--- PASS: TestErrorSpam/setup (56.71s)

TestErrorSpam/start (0.42s)
=== RUN   TestErrorSpam/start
error_spam_test.go:213: Cleaning up 1 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210810222557-30291 --log_dir /tmp/nospam-20210810222557-30291 start --dry-run
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210810222557-30291 --log_dir /tmp/nospam-20210810222557-30291 start --dry-run
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210810222557-30291 --log_dir /tmp/nospam-20210810222557-30291 start --dry-run
--- PASS: TestErrorSpam/start (0.42s)

TestErrorSpam/status (0.77s)
=== RUN   TestErrorSpam/status
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210810222557-30291 --log_dir /tmp/nospam-20210810222557-30291 status
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210810222557-30291 --log_dir /tmp/nospam-20210810222557-30291 status
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210810222557-30291 --log_dir /tmp/nospam-20210810222557-30291 status
--- PASS: TestErrorSpam/status (0.77s)

TestErrorSpam/pause (3.53s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210810222557-30291 --log_dir /tmp/nospam-20210810222557-30291 pause
error_spam_test.go:156: (dbg) Done: out/minikube-linux-amd64 -p nospam-20210810222557-30291 --log_dir /tmp/nospam-20210810222557-30291 pause: (2.48103863s)
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210810222557-30291 --log_dir /tmp/nospam-20210810222557-30291 pause
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210810222557-30291 --log_dir /tmp/nospam-20210810222557-30291 pause
--- PASS: TestErrorSpam/pause (3.53s)

TestErrorSpam/unpause (1.75s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210810222557-30291 --log_dir /tmp/nospam-20210810222557-30291 unpause
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210810222557-30291 --log_dir /tmp/nospam-20210810222557-30291 unpause
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210810222557-30291 --log_dir /tmp/nospam-20210810222557-30291 unpause
--- PASS: TestErrorSpam/unpause (1.75s)

TestErrorSpam/stop (6.25s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210810222557-30291 --log_dir /tmp/nospam-20210810222557-30291 stop
error_spam_test.go:156: (dbg) Done: out/minikube-linux-amd64 -p nospam-20210810222557-30291 --log_dir /tmp/nospam-20210810222557-30291 stop: (6.099971898s)
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210810222557-30291 --log_dir /tmp/nospam-20210810222557-30291 stop
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210810222557-30291 --log_dir /tmp/nospam-20210810222557-30291 stop
--- PASS: TestErrorSpam/stop (6.25s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1606: local sync path: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/test/nested/copy/30291/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (74.6s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:1982: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210810222707-30291 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0810 22:27:34.990487   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/client.crt: no such file or directory
functional_test.go:1982: (dbg) Done: out/minikube-linux-amd64 start -p functional-20210810222707-30291 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m14.59804066s)
--- PASS: TestFunctional/serial/StartWithProxy (74.60s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (20.4s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:627: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210810222707-30291 --alsologtostderr -v=8
functional_test.go:627: (dbg) Done: out/minikube-linux-amd64 start -p functional-20210810222707-30291 --alsologtostderr -v=8: (20.403784076s)
functional_test.go:631: soft start took 20.404540864s for "functional-20210810222707-30291" cluster.
--- PASS: TestFunctional/serial/SoftStart (20.40s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:647: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.2s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:660: (dbg) Run:  kubectl --context functional-20210810222707-30291 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.20s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.38s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:982: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 cache add k8s.gcr.io/pause:3.1
functional_test.go:982: (dbg) Done: out/minikube-linux-amd64 -p functional-20210810222707-30291 cache add k8s.gcr.io/pause:3.1: (1.041948826s)
functional_test.go:982: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 cache add k8s.gcr.io/pause:3.3
functional_test.go:982: (dbg) Done: out/minikube-linux-amd64 -p functional-20210810222707-30291 cache add k8s.gcr.io/pause:3.3: (1.740364853s)
functional_test.go:982: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 cache add k8s.gcr.io/pause:latest
functional_test.go:982: (dbg) Done: out/minikube-linux-amd64 -p functional-20210810222707-30291 cache add k8s.gcr.io/pause:latest: (1.601992917s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.38s)

TestFunctional/serial/CacheCmd/cache/add_local (2.31s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1012: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20210810222707-30291 /tmp/functional-20210810222707-30291198517529
functional_test.go:1024: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 cache add minikube-local-cache-test:functional-20210810222707-30291
functional_test.go:1024: (dbg) Done: out/minikube-linux-amd64 -p functional-20210810222707-30291 cache add minikube-local-cache-test:functional-20210810222707-30291: (2.023696755s)
functional_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 cache delete minikube-local-cache-test:functional-20210810222707-30291
functional_test.go:1018: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20210810222707-30291
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.31s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1036: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1043: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1056: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.98s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1078: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 ssh sudo crictl rmi k8s.gcr.io/pause:latest
functional_test.go:1084: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1084: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210810222707-30291 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (231.116543ms)
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 cache reload
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-20210810222707-30291 cache reload: (1.253187286s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.98s)

TestFunctional/serial/CacheCmd/cache/delete (0.11s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1103: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1103: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:678: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 kubectl -- --context functional-20210810222707-30291 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:701: (dbg) Run:  out/kubectl --context functional-20210810222707-30291 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (35.72s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:715: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210810222707-30291 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0810 22:28:56.911263   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/client.crt: no such file or directory
functional_test.go:715: (dbg) Done: out/minikube-linux-amd64 start -p functional-20210810222707-30291 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.715157767s)
functional_test.go:719: restart took 35.715270156s for "functional-20210810222707-30291" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (35.72s)

TestFunctional/serial/ComponentHealth (0.08s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:766: (dbg) Run:  kubectl --context functional-20210810222707-30291 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:780: etcd phase: Running
functional_test.go:790: etcd status: Ready
functional_test.go:780: kube-apiserver phase: Running
functional_test.go:790: kube-apiserver status: Ready
functional_test.go:780: kube-controller-manager phase: Running
functional_test.go:790: kube-controller-manager status: Ready
functional_test.go:780: kube-scheduler phase: Running
functional_test.go:790: kube-scheduler status: Ready
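The phase/status lines above boil down to reading each control-plane pod's `status.phase` and its `Ready` condition from the `kubectl get po -o=json` output. A minimal Python sketch of that check; the embedded JSON is a trimmed, hypothetical API response, not captured from this run:

```python
import json

# Trimmed, illustrative stand-in for `kubectl get po -l tier=control-plane
# -n kube-system -o=json` output (only two pods, only the fields used here).
sample = json.loads("""
{"items": [
  {"metadata": {"labels": {"component": "etcd"}},
   "status": {"phase": "Running",
              "conditions": [{"type": "Ready", "status": "True"}]}},
  {"metadata": {"labels": {"component": "kube-apiserver"}},
   "status": {"phase": "Running",
              "conditions": [{"type": "Ready", "status": "True"}]}}
]}
""")

def component_health(pod_list):
    """Yield (component, phase, ready) for each pod in an `items` list."""
    for pod in pod_list["items"]:
        name = pod["metadata"]["labels"]["component"]
        phase = pod["status"]["phase"]
        # A pod is Ready when its Ready condition reports status "True".
        ready = any(c["type"] == "Ready" and c["status"] == "True"
                    for c in pod["status"].get("conditions", []))
        yield name, phase, ready

for name, phase, ready in component_health(sample):
    print(f"{name} phase: {phase}, ready: {ready}")
```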
--- PASS: TestFunctional/serial/ComponentHealth (0.08s)

TestFunctional/serial/LogsCmd (1.44s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1165: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 logs
functional_test.go:1165: (dbg) Done: out/minikube-linux-amd64 -p functional-20210810222707-30291 logs: (1.439955705s)
--- PASS: TestFunctional/serial/LogsCmd (1.44s)

TestFunctional/serial/LogsFileCmd (1.46s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1181: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 logs --file /tmp/functional-20210810222707-30291843013028/logs.txt
functional_test.go:1181: (dbg) Done: out/minikube-linux-amd64 -p functional-20210810222707-30291 logs --file /tmp/functional-20210810222707-30291843013028/logs.txt: (1.46030539s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.46s)

TestFunctional/parallel/ConfigCmd (0.41s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 config get cpus
functional_test.go:1129: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210810222707-30291 config get cpus: exit status 14 (83.229191ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 config set cpus 2
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 config unset cpus
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 config get cpus
functional_test.go:1129: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210810222707-30291 config get cpus: exit status 14 (59.235492ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
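The semantics exercised above (`get` on an unset key fails with exit status 14, while `set`/`unset` always succeed) can be sketched as follows. The class, constant, and error text are illustrative stand-ins, not minikube's actual implementation:

```python
ERR_USAGE = 14  # hypothetical constant mirroring the exit code in the log


class ConfigError(Exception):
    """Raised when a requested key is absent, as `config get cpus` was."""


class Config:
    def __init__(self):
        self._store = {}

    def set(self, key, value):
        self._store[key] = value

    def unset(self, key):
        # Unsetting a missing key is not an error, matching the log above.
        self._store.pop(key, None)

    def get(self, key):
        if key not in self._store:
            raise ConfigError("specified key could not be found in config")
        return self._store[key]


cfg = Config()
cfg.set("cpus", "2")
print(cfg.get("cpus"))
cfg.unset("cpus")
try:
    cfg.get("cpus")
except ConfigError as err:
    print(f"Error: {err} (exit status {ERR_USAGE})")
```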
--- PASS: TestFunctional/parallel/ConfigCmd (0.41s)

TestFunctional/parallel/DashboardCmd (6.13s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:857: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20210810222707-30291 --alsologtostderr -v=1]

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:862: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20210810222707-30291 --alsologtostderr -v=1] ...
helpers_test.go:504: unable to kill pid 3193: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.13s)

TestFunctional/parallel/DryRun (0.34s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:919: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210810222707-30291 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:919: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20210810222707-30291 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (173.716412ms)

-- stdout --
	* [functional-20210810222707-30291] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	  - MINIKUBE_LOCATION=12230
	* Using the kvm2 driver based on existing profile

-- /stdout --
** stderr ** 
	I0810 22:30:11.321905    3064 out.go:298] Setting OutFile to fd 1 ...
	I0810 22:30:11.321991    3064 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:30:11.322004    3064 out.go:311] Setting ErrFile to fd 2...
	I0810 22:30:11.322008    3064 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:30:11.322149    3064 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin
	I0810 22:30:11.322447    3064 out.go:305] Setting JSON to false
	I0810 22:30:11.362773    3064 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-3","uptime":7971,"bootTime":1628626640,"procs":187,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0810 22:30:11.362940    3064 start.go:121] virtualization: kvm guest
	I0810 22:30:11.365172    3064 out.go:177] * [functional-20210810222707-30291] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0810 22:30:11.366695    3064 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0810 22:30:11.368077    3064 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0810 22:30:11.369412    3064 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	I0810 22:30:11.370703    3064 out.go:177]   - MINIKUBE_LOCATION=12230
	I0810 22:30:11.371491    3064 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:30:11.371574    3064 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:30:11.384975    3064 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35973
	I0810 22:30:11.385446    3064 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:30:11.386058    3064 main.go:130] libmachine: Using API Version  1
	I0810 22:30:11.386083    3064 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:30:11.386462    3064 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:30:11.386678    3064 main.go:130] libmachine: (functional-20210810222707-30291) Calling .DriverName
	I0810 22:30:11.386884    3064 driver.go:335] Setting default libvirt URI to qemu:///system
	I0810 22:30:11.387298    3064 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:30:11.387342    3064 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:30:11.399775    3064 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40625
	I0810 22:30:11.400275    3064 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:30:11.400728    3064 main.go:130] libmachine: Using API Version  1
	I0810 22:30:11.400756    3064 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:30:11.401138    3064 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:30:11.401334    3064 main.go:130] libmachine: (functional-20210810222707-30291) Calling .DriverName
	I0810 22:30:11.435323    3064 out.go:177] * Using the kvm2 driver based on existing profile
	I0810 22:30:11.435358    3064 start.go:278] selected driver: kvm2
	I0810 22:30:11.435366    3064 start.go:751] validating driver "kvm2" against &{Name:functional-20210810222707-30291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12122/minikube-v1.22.0-1628238775-12122.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 Clus
terName:functional-20210810222707-30291 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.105 Port:8441 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false re
gistry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0810 22:30:11.435538    3064 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0810 22:30:11.438169    3064 out.go:177] 
	W0810 22:30:11.438274    3064 out.go:242] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0810 22:30:11.439540    3064 out.go:177] 

** /stderr **
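The RSRC_INSUFFICIENT_REQ_MEMORY exit above comes from a pre-flight check that rejects memory requests below a usable minimum (1800MB in this log). A hedged sketch of that validation; the function name and message assembly are illustrative, not minikube's code:

```python
MIN_USABLE_MB = 1800  # the floor reported in the log above


def validate_requested_memory(requested_mb):
    """Return None when the request is acceptable, else an error string
    styled after the log message."""
    if requested_mb < MIN_USABLE_MB:
        return ("X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested "
                f"memory allocation {requested_mb}MiB is less than the "
                f"usable minimum of {MIN_USABLE_MB}MB")
    return None


# 250MB (the --memory flag used above) is rejected; the profile's 4000MB
# default passes.
print(validate_requested_memory(250))
print(validate_requested_memory(4000))
```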
functional_test.go:934: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210810222707-30291 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.34s)

TestFunctional/parallel/InternationalLanguage (0.17s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:956: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210810222707-30291 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:956: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20210810222707-30291 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (172.763541ms)

-- stdout --
	* [functional-20210810222707-30291] minikube v1.22.0 sur Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	  - MINIKUBE_LOCATION=12230
	* Utilisation du pilote kvm2 basé sur le profil existant

-- /stdout --
** stderr ** 
	I0810 22:30:11.158454    3018 out.go:298] Setting OutFile to fd 1 ...
	I0810 22:30:11.158538    3018 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:30:11.158549    3018 out.go:311] Setting ErrFile to fd 2...
	I0810 22:30:11.158553    3018 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:30:11.158709    3018 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin
	I0810 22:30:11.158962    3018 out.go:305] Setting JSON to false
	I0810 22:30:11.195353    3018 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-3","uptime":7971,"bootTime":1628626640,"procs":185,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0810 22:30:11.195535    3018 start.go:121] virtualization: kvm guest
	I0810 22:30:11.198450    3018 out.go:177] * [functional-20210810222707-30291] minikube v1.22.0 sur Debian 9.13 (kvm/amd64)
	I0810 22:30:11.199895    3018 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0810 22:30:11.201294    3018 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0810 22:30:11.202812    3018 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	I0810 22:30:11.204323    3018 out.go:177]   - MINIKUBE_LOCATION=12230
	I0810 22:30:11.205117    3018 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:30:11.205211    3018 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:30:11.216505    3018 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43237
	I0810 22:30:11.216948    3018 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:30:11.217571    3018 main.go:130] libmachine: Using API Version  1
	I0810 22:30:11.217595    3018 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:30:11.217969    3018 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:30:11.218161    3018 main.go:130] libmachine: (functional-20210810222707-30291) Calling .DriverName
	I0810 22:30:11.218352    3018 driver.go:335] Setting default libvirt URI to qemu:///system
	I0810 22:30:11.218804    3018 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:30:11.218852    3018 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:30:11.229229    3018 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43873
	I0810 22:30:11.229641    3018 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:30:11.230156    3018 main.go:130] libmachine: Using API Version  1
	I0810 22:30:11.230188    3018 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:30:11.230589    3018 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:30:11.230760    3018 main.go:130] libmachine: (functional-20210810222707-30291) Calling .DriverName
	I0810 22:30:11.260982    3018 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0810 22:30:11.261016    3018 start.go:278] selected driver: kvm2
	I0810 22:30:11.261023    3018 start.go:751] validating driver "kvm2" against &{Name:functional-20210810222707-30291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12122/minikube-v1.22.0-1628238775-12122.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 Clus
terName:functional-20210810222707-30291 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.105 Port:8441 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false re
gistry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0810 22:30:11.261199    3018 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0810 22:30:11.264010    3018 out.go:177] 
	W0810 22:30:11.264217    3018 out.go:242] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0810 22:30:11.265826    3018 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

TestFunctional/parallel/StatusCmd (0.93s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:809: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 status
functional_test.go:815: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:826: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.93s)

TestFunctional/parallel/ServiceCmd (37.82s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1357: (dbg) Run:  kubectl --context functional-20210810222707-30291 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1363: (dbg) Run:  kubectl --context functional-20210810222707-30291 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1368: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:340: "hello-node-6cbfcd7cbc-ljpkr" [f07eb69c-b978-4d21-a39f-d3ba8f19cc9a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:340: "hello-node-6cbfcd7cbc-ljpkr" [f07eb69c-b978-4d21-a39f-d3ba8f19cc9a] Running

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1368: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 35.28758339s
functional_test.go:1372: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 service list
functional_test.go:1372: (dbg) Done: out/minikube-linux-amd64 -p functional-20210810222707-30291 service list: (1.355777649s)
functional_test.go:1385: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 service --namespace=default --https --url hello-node
functional_test.go:1394: found endpoint: https://192.168.50.105:31424
functional_test.go:1405: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 service hello-node --url --format={{.IP}}
functional_test.go:1414: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 service hello-node --url
functional_test.go:1420: found endpoint for hello-node: http://192.168.50.105:31424
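The endpoint printed here is assembled from the node IP and the service's NodePort. A small Python illustration; the embedded service JSON is a trimmed, hypothetical `kubectl get svc -o=json` response using the values from this log:

```python
import json

# Illustrative stand-in for the hello-node Service object (only the fields
# needed to build the URL).
svc = json.loads("""
{"spec": {"ports": [{"port": 8080, "nodePort": 31424, "protocol": "TCP"}]}}
""")


def service_url(node_ip, service, https=False):
    """Combine the node IP with the service's first NodePort, as
    `minikube service --url` (and `--https --url`) does."""
    scheme = "https" if https else "http"
    node_port = service["spec"]["ports"][0]["nodePort"]
    return f"{scheme}://{node_ip}:{node_port}"


print(service_url("192.168.50.105", svc))               # -> http://192.168.50.105:31424
print(service_url("192.168.50.105", svc, https=True))   # -> https://192.168.50.105:31424
```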
functional_test.go:1431: Attempting to fetch http://192.168.50.105:31424 ...
functional_test.go:1450: http://192.168.50.105:31424: success! body:

Hostname: hello-node-6cbfcd7cbc-ljpkr

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.105:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.50.105:31424
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmd (37.82s)

TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1465: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 addons list
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (69.64s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:340: "storage-provisioner" [8106192f-b2c1-483b-84e3-671ea4354e54] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.020700946s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-20210810222707-30291 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-20210810222707-30291 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20210810222707-30291 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20210810222707-30291 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20210810222707-30291 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:340: "sp-pod" [7e6427cc-e910-4293-b8bb-58e538e3f40b] Pending
helpers_test.go:340: "sp-pod" [7e6427cc-e910-4293-b8bb-58e538e3f40b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:340: "sp-pod" [7e6427cc-e910-4293-b8bb-58e538e3f40b] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 39.010351005s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-20210810222707-30291 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-20210810222707-30291 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-20210810222707-30291 delete -f testdata/storage-provisioner/pod.yaml: (12.832582767s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20210810222707-30291 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:340: "sp-pod" [5f64d62c-0edd-44d8-8152-10d5bad56100] Pending
helpers_test.go:340: "sp-pod" [5f64d62c-0edd-44d8-8152-10d5bad56100] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:340: "sp-pod" [5f64d62c-0edd-44d8-8152-10d5bad56100] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.011138624s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-20210810222707-30291 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (69.64s)
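The sequence above checks that data written to the claim outlives the pod: the first sp-pod touches /tmp/mount/foo, the pod is deleted and recreated, and the second sp-pod still lists the file. A minimal local sketch of that persistence check, with a temp directory standing in for the volume (the real test drives these steps through `kubectl exec sp-pod` against the mounted PVC):

```shell
# "$pv" is a stand-in for the PVC's backing storage; assumption for
# illustration only.
pv=$(mktemp -d)

# First pod writes a marker file (the `touch /tmp/mount/foo` step).
touch "$pv/foo"

# The pod is deleted and recreated here; the volume outlives the pod,
# so the second pod's `ls /tmp/mount` should still show the file.
ls "$pv" | grep -q foo && echo "file survived pod recreation"
```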

TestFunctional/parallel/SSHCmd (0.61s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1498: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 ssh "echo hello"

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1515: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.61s)

TestFunctional/parallel/CpCmd (0.6s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 cp testdata/cp-test.txt /home/docker/cp-test.txt

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:546: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 ssh "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.60s)

TestFunctional/parallel/MySQL (34.69s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1546: (dbg) Run:  kubectl --context functional-20210810222707-30291 replace --force -f testdata/mysql.yaml

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1551: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:340: "mysql-9bbbc5bbb-zvv8z" [06dda61a-6938-4e67-bbb5-a50111eb38ac] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:340: "mysql-9bbbc5bbb-zvv8z" [06dda61a-6938-4e67-bbb5-a50111eb38ac] Running
functional_test.go:1551: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 26.024577325s
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20210810222707-30291 exec mysql-9bbbc5bbb-zvv8z -- mysql -ppassword -e "show databases;"

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1558: (dbg) Non-zero exit: kubectl --context functional-20210810222707-30291 exec mysql-9bbbc5bbb-zvv8z -- mysql -ppassword -e "show databases;": exit status 1 (628.276491ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20210810222707-30291 exec mysql-9bbbc5bbb-zvv8z -- mysql -ppassword -e "show databases;"
functional_test.go:1558: (dbg) Non-zero exit: kubectl --context functional-20210810222707-30291 exec mysql-9bbbc5bbb-zvv8z -- mysql -ppassword -e "show databases;": exit status 1 (467.000017ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20210810222707-30291 exec mysql-9bbbc5bbb-zvv8z -- mysql -ppassword -e "show databases;"
functional_test.go:1558: (dbg) Non-zero exit: kubectl --context functional-20210810222707-30291 exec mysql-9bbbc5bbb-zvv8z -- mysql -ppassword -e "show databases;": exit status 1 (296.053075ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20210810222707-30291 exec mysql-9bbbc5bbb-zvv8z -- mysql -ppassword -e "show databases;"

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1558: (dbg) Non-zero exit: kubectl --context functional-20210810222707-30291 exec mysql-9bbbc5bbb-zvv8z -- mysql -ppassword -e "show databases;": exit status 1 (204.45311ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20210810222707-30291 exec mysql-9bbbc5bbb-zvv8z -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (34.69s)
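The repeated "Access denied" and "Can't connect ... mysqld.sock" failures above are expected: the test simply reruns the same query until mysqld finishes initializing inside the pod. A hypothetical retry helper sketching that polling pattern (the function name and attempt budget are illustrative, not the test's actual code):

```shell
# Rerun a command until it succeeds or the attempt budget is exhausted.
retry() {
  attempts=$1; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    "$@" && return 0        # command succeeded: done
    i=$((i + 1))
    sleep 1                 # back off before the next attempt
  done
  return 1                  # budget exhausted: report failure
}

# Equivalent of the polling in the log (requires a running cluster):
# retry 30 kubectl --context functional-20210810222707-30291 \
#   exec mysql-9bbbc5bbb-zvv8z -- mysql -ppassword -e "show databases;"
```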

TestFunctional/parallel/FileSync (0.28s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1678: Checking for existence of /etc/test/nested/copy/30291/hosts within VM
functional_test.go:1679: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 ssh "sudo cat /etc/test/nested/copy/30291/hosts"

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1684: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

TestFunctional/parallel/CertSync (1.71s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1719: Checking for existence of /etc/ssl/certs/30291.pem within VM
functional_test.go:1720: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 ssh "sudo cat /etc/ssl/certs/30291.pem"
functional_test.go:1719: Checking for existence of /usr/share/ca-certificates/30291.pem within VM
functional_test.go:1720: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 ssh "sudo cat /usr/share/ca-certificates/30291.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1719: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1720: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 ssh "sudo cat /etc/ssl/certs/51391683.0"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1746: Checking for existence of /etc/ssl/certs/302912.pem within VM
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 ssh "sudo cat /etc/ssl/certs/302912.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1746: Checking for existence of /usr/share/ca-certificates/302912.pem within VM
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 ssh "sudo cat /usr/share/ca-certificates/302912.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1746: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.71s)
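The test checks each synced certificate under two paths: its literal name (e.g. /etc/ssl/certs/30291.pem) and an OpenSSL hashed-directory name (e.g. /etc/ssl/certs/51391683.0), because TLS libraries look CA certs up by subject hash. A sketch of how that <hash>.0 name is derived; the certificate and subject generated here are made up for illustration, and the block assumes `openssl` is installed:

```shell
# Generate a throwaway self-signed cert (illustrative subject).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=certsync-demo" \
  -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem 2>/dev/null

# The subject hash is the 8-hex-digit basename used in /etc/ssl/certs;
# ".0" is a counter that disambiguates hash collisions.
hash=$(openssl x509 -noout -hash -in /tmp/demo-cert.pem)
echo "would be installed as /etc/ssl/certs/${hash}.0"
```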

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:216: (dbg) Run:  kubectl --context functional-20210810222707-30291 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/LoadImage (3.39s)

=== RUN   TestFunctional/parallel/LoadImage
=== PAUSE TestFunctional/parallel/LoadImage

=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:239: (dbg) Run:  docker pull busybox:1.33

=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:239: (dbg) Done: docker pull busybox:1.33: (1.332845182s)
functional_test.go:246: (dbg) Run:  docker tag busybox:1.33 docker.io/library/busybox:load-functional-20210810222707-30291
functional_test.go:252: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 image load docker.io/library/busybox:load-functional-20210810222707-30291

=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:252: (dbg) Done: out/minikube-linux-amd64 -p functional-20210810222707-30291 image load docker.io/library/busybox:load-functional-20210810222707-30291: (1.696264662s)
functional_test.go:373: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210810222707-30291 -- sudo crictl inspecti docker.io/library/busybox:load-functional-20210810222707-30291
--- PASS: TestFunctional/parallel/LoadImage (3.39s)

TestFunctional/parallel/RemoveImage (3.77s)

=== RUN   TestFunctional/parallel/RemoveImage
=== PAUSE TestFunctional/parallel/RemoveImage

=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:331: (dbg) Run:  docker pull busybox:1.32

=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:331: (dbg) Done: docker pull busybox:1.32: (1.232655431s)
functional_test.go:338: (dbg) Run:  docker tag busybox:1.32 docker.io/library/busybox:remove-functional-20210810222707-30291
functional_test.go:344: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 image load docker.io/library/busybox:remove-functional-20210810222707-30291

=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:344: (dbg) Done: out/minikube-linux-amd64 -p functional-20210810222707-30291 image load docker.io/library/busybox:remove-functional-20210810222707-30291: (1.824068541s)
functional_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 image rm docker.io/library/busybox:remove-functional-20210810222707-30291

=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:387: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210810222707-30291 -- sudo crictl images
--- PASS: TestFunctional/parallel/RemoveImage (3.77s)

TestFunctional/parallel/LoadImageFromFile (2.63s)

=== RUN   TestFunctional/parallel/LoadImageFromFile
=== PAUSE TestFunctional/parallel/LoadImageFromFile

=== CONT  TestFunctional/parallel/LoadImageFromFile
functional_test.go:279: (dbg) Run:  docker pull busybox:1.31

=== CONT  TestFunctional/parallel/LoadImageFromFile
functional_test.go:279: (dbg) Done: docker pull busybox:1.31: (1.236731641s)
functional_test.go:286: (dbg) Run:  docker tag busybox:1.31 docker.io/library/busybox:load-from-file-functional-20210810222707-30291
functional_test.go:293: (dbg) Run:  docker save -o busybox.tar docker.io/library/busybox:load-from-file-functional-20210810222707-30291
functional_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 image load /home/jenkins/workspace/KVM_Linux_crio_integration/busybox.tar

=== CONT  TestFunctional/parallel/LoadImageFromFile
functional_test.go:387: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210810222707-30291 -- sudo crictl images
--- PASS: TestFunctional/parallel/LoadImageFromFile (2.63s)

TestFunctional/parallel/BuildImage (6.53s)

=== RUN   TestFunctional/parallel/BuildImage
=== PAUSE TestFunctional/parallel/BuildImage

=== CONT  TestFunctional/parallel/BuildImage

=== CONT  TestFunctional/parallel/BuildImage
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 image build -t localhost/my-image:functional-20210810222707-30291 testdata/build

=== CONT  TestFunctional/parallel/BuildImage
functional_test.go:407: (dbg) Done: out/minikube-linux-amd64 -p functional-20210810222707-30291 image build -t localhost/my-image:functional-20210810222707-30291 testdata/build: (6.280294046s)
functional_test.go:412: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20210810222707-30291 image build -t localhost/my-image:functional-20210810222707-30291 testdata/build:
STEP 1: FROM busybox
STEP 2: RUN true
--> f5d592cef0a
STEP 3: ADD content.txt /
STEP 4: COMMIT localhost/my-image:functional-20210810222707-30291
--> 5a0625751f9
5a0625751f95aea643221a180791a63d0e5bbbfde70a676741bf424935fe2c22
functional_test.go:415: (dbg) Stderr: out/minikube-linux-amd64 -p functional-20210810222707-30291 image build -t localhost/my-image:functional-20210810222707-30291 testdata/build:
Completed short name "busybox" with unqualified-search registries (origin: /etc/containers/registries.conf)
Getting image source signatures
Copying blob sha256:b71f96345d44b237decc0c2d6c2f9ad0d17fde83dad7579608f1f0764d9686f2
Copying config sha256:69593048aa3acfee0f75f20b77acb549de2472063053f6730c4091b53f2dfb02
Writing manifest to image destination
Storing signatures
functional_test.go:373: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210810222707-30291 -- sudo crictl inspecti localhost/my-image:functional-20210810222707-30291
--- PASS: TestFunctional/parallel/BuildImage (6.53s)
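The STEP lines in the build output imply the contents of the testdata/build context. A sketch reconstructing such a context locally (inferred from the STEP 1-4 lines above; the content.txt payload is a placeholder, and the final build command is shown only as a comment since it needs a running cluster):

```shell
# Build context reconstructed from the logged build steps.
ctx=$(mktemp -d)
printf 'placeholder\n' > "$ctx/content.txt"
cat > "$ctx/Dockerfile" <<'EOF'
FROM busybox
RUN true
ADD content.txt /
EOF

# The test then drives the in-cluster build (buildah under CRI-O) with:
# minikube -p functional-20210810222707-30291 \
#   image build -t localhost/my-image:functional-20210810222707-30291 "$ctx"
```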

TestFunctional/parallel/ListImages (0.52s)

=== RUN   TestFunctional/parallel/ListImages
=== PAUSE TestFunctional/parallel/ListImages

=== CONT  TestFunctional/parallel/ListImages
functional_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 image ls

=== CONT  TestFunctional/parallel/ListImages
functional_test.go:446: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20210810222707-30291 image ls:
localhost/minikube-local-cache-test:functional-20210810222707-30291
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.4.1
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.21.3
k8s.gcr.io/kube-proxy:v1.21.3
k8s.gcr.io/kube-controller-manager:v1.21.3
k8s.gcr.io/kube-apiserver:v1.21.3
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns/coredns:v1.8.0
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/kubernetesui/metrics-scraper:v1.0.4
docker.io/kubernetesui/dashboard:v2.1.0
docker.io/kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestFunctional/parallel/ListImages (0.52s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.54s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1774: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 ssh "sudo systemctl is-active docker"

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1774: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210810222707-30291 ssh "sudo systemctl is-active docker": exit status 1 (271.264711ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:1774: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 ssh "sudo systemctl is-active containerd"

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1774: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210810222707-30291 ssh "sudo systemctl is-active containerd": exit status 1 (264.439668ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.54s)
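The "Non-zero exit" results here are the success path: with CRI-O as the active runtime, `systemctl is-active` prints "inactive" and exits 3 for docker and containerd, which ssh surfaces as the non-zero status seen in the stderr blocks. A sketch of that assertion pattern; `probe` is a stand-in for running the command over `minikube ssh`, not real test code:

```shell
# Pass only when the unit reports "inactive"; a successful exit from
# `is-active` would mean the runtime is running, which is a failure here.
assert_inactive() {
  out=$("$@")               # capture stdout; non-zero exit is expected
  [ "$out" = "inactive" ]
}

# Stand-in for `minikube ssh "sudo systemctl is-active docker"` on a
# host where docker is stopped: prints "inactive", exits 3.
probe() { echo inactive; return 3; }
assert_inactive probe && echo "docker is disabled"
```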

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2003: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (1s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2016: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 version -o=json --components
functional_test.go:2016: (dbg) Done: out/minikube-linux-amd64 -p functional-20210810222707-30291 version -o=json --components: (1.004790293s)
--- PASS: TestFunctional/parallel/Version/components (1.00s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:1865: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:1865: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:1865: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 update-context --alsologtostderr -v=2
2021/08/10 22:30:17 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:126: (dbg) daemon: [out/minikube-linux-amd64 -p functional-20210810222707-30291 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:164: (dbg) Run:  kubectl --context functional-20210810222707-30291 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:229: tunnel at http://10.107.70.245 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:364: (dbg) stopping [out/minikube-linux-amd64 -p functional-20210810222707-30291 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.34s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1202: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1206: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.34s)

TestFunctional/parallel/ProfileCmd/profile_list (0.3s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1240: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1245: Took "243.374353ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1254: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1259: Took "55.512617ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.30s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.3s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1295: Took "244.882226ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1303: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1308: Took "56.399671ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.30s)

TestFunctional/parallel/MountCmd/any-port (11.01s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:76: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20210810222707-30291 /tmp/mounttest127659699:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:110: wrote "test-1628634609575739637" to /tmp/mounttest127659699/created-by-test
functional_test_mount_test.go:110: wrote "test-1628634609575739637" to /tmp/mounttest127659699/created-by-test-removed-by-pod
functional_test_mount_test.go:110: wrote "test-1628634609575739637" to /tmp/mounttest127659699/test-1628634609575739637
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:118: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210810222707-30291 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (209.066764ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 ssh "findmnt -T /mount-9p | grep 9p"

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 ssh -- ls -la /mount-9p
functional_test_mount_test.go:136: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 10 22:30 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 10 22:30 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 10 22:30 test-1628634609575739637
functional_test_mount_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 ssh cat /mount-9p/test-1628634609575739637

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:151: (dbg) Run:  kubectl --context functional-20210810222707-30291 replace --force -f testdata/busybox-mount-test.yaml

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:340: "busybox-mount" [4ee48182-fe1e-49c0-b727-8ae2d8af5ff4] Pending

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:340: "busybox-mount" [4ee48182-fe1e-49c0-b727-8ae2d8af5ff4] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:340: "busybox-mount" [4ee48182-fe1e-49c0-b727-8ae2d8af5ff4] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.021130747s
functional_test_mount_test.go:172: (dbg) Run:  kubectl --context functional-20210810222707-30291 logs busybox-mount
functional_test_mount_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:93: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:97: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20210810222707-30291 /tmp/mounttest127659699:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (11.01s)
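In the log above the first `findmnt -T /mount-9p | grep 9p` probe exits non-zero before the 9p mount is ready, and the test simply re-runs it until it succeeds. That polling pattern can be sketched as a small shell helper; the `retry` name and attempt counts here are ours, not minikube's, and no minikube binary is needed to run the sketch:

```shell
#!/bin/sh
# retry COUNT CMD...: re-run CMD until it succeeds or COUNT attempts are used,
# mirroring how the test re-runs the findmnt probe after an initial failure.
retry() {
  attempts=$1; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then
      return 0          # probe succeeded, stop polling
    fi
    i=$((i + 1))
    sleep 1             # wait before the next probe
  done
  return 1              # still failing after all attempts
}

# Example: a probe that succeeds immediately.
retry 5 true && echo "mount is up"
```

In the real test the probed command would be the `ssh "findmnt -T /mount-9p | grep 9p"` invocation shown above.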

TestFunctional/parallel/MountCmd/specific-port (1.88s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:225: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20210810222707-30291 /tmp/mounttest468219160:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210810222707-30291 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (211.826287ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:269: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 ssh -- ls -la /mount-9p
functional_test_mount_test.go:273: guest mount directory contents
total 0
functional_test_mount_test.go:275: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20210810222707-30291 /tmp/mounttest468219160:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:276: reading mount text
functional_test_mount_test.go:290: done reading mount text
functional_test_mount_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210810222707-30291 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:242: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210810222707-30291 ssh "sudo umount -f /mount-9p": exit status 1 (205.131453ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:244: "out/minikube-linux-amd64 -p functional-20210810222707-30291 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:246: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20210810222707-30291 /tmp/mounttest468219160:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.88s)

TestFunctional/delete_busybox_image (0.09s)

=== RUN   TestFunctional/delete_busybox_image
functional_test.go:183: (dbg) Run:  docker rmi -f docker.io/library/busybox:load-functional-20210810222707-30291
functional_test.go:188: (dbg) Run:  docker rmi -f docker.io/library/busybox:remove-functional-20210810222707-30291
--- PASS: TestFunctional/delete_busybox_image (0.09s)

TestFunctional/delete_my-image_image (0.04s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:195: (dbg) Run:  docker rmi -f localhost/my-image:functional-20210810222707-30291
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:203: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20210810222707-30291
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.33s)

=== RUN   TestErrorJSONOutput
json_output_test.go:146: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-20210810223223-30291 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:146: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-20210810223223-30291 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (94.792557ms)

-- stdout --
	{"data":{"currentstep":"0","message":"[json-output-error-20210810223223-30291] minikube v1.22.0 on Debian 9.13 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"},"datacontenttype":"application/json","id":"478be184-4304-4472-8646-1113d8471ba3","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig"},"datacontenttype":"application/json","id":"c4bee396-9a8f-4c11-8717-f7bca6ced909","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"},"datacontenttype":"application/json","id":"181dd4ff-2842-4d34-b83a-0320cce1f23e","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube"},"datacontenttype":"application/json","id":"a70f8bd5-3b1d-4f16-9290-c95818b72b77","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_LOCATION=12230"},"datacontenttype":"application/json","id":"d2b3c292-1f34-44ad-b6f1-bf90e029689f","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""},"datacontenttype":"application/json","id":"bd54d6a9-5d3d-4db9-87f5-73b9693a791d","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.error"}

-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-20210810223223-30291" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-20210810223223-30291
--- PASS: TestErrorJSONOutput (0.33s)
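Each line of the `--output=json` stream shown above is a CloudEvents-style JSON object, and the machine-readable exit code lives in the `io.k8s.sigs.minikube.error` event. A minimal sketch of pulling it out (the `extract_exit_code` helper name is ours, and only grep/sed are used so `jq` isn't required):

```shell
#!/bin/sh
# extract_exit_code: read a minikube --output=json event stream on stdin and
# print the "exitcode" field of the first io.k8s.sigs.minikube.error event.
extract_exit_code() {
  grep '"type":"io.k8s.sigs.minikube.error"' |
    sed -n 's/.*"exitcode":"\([0-9]*\)".*/\1/p' |
    head -n 1
}

# A trimmed error event in the same shape as the log above:
sample='{"data":{"exitcode":"56","name":"DRV_UNSUPPORTED_OS"},"type":"io.k8s.sigs.minikube.error"}'
printf '%s\n' "$sample" | extract_exit_code   # prints 56
```

In practice the stream would come from a command such as the `minikube start --output=json` invocation in the test; lines of other event types (`step`, `info`) are simply filtered out by the grep.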

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMultiNode/serial/FreshStart2Nodes (127.73s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210810223223-30291 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210810223223-30291 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m7.306975002s)
multinode_test.go:87: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210810223223-30291 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (127.73s)

TestMultiNode/serial/AddNode (48.13s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:106: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20210810223223-30291 -v 3 --alsologtostderr
multinode_test.go:106: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-20210810223223-30291 -v 3 --alsologtostderr: (47.546889435s)
multinode_test.go:112: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210810223223-30291 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (48.13s)

TestMultiNode/serial/ProfileList (0.23s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:128: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.23s)

TestMultiNode/serial/CopyFile (1.84s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:169: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210810223223-30291 status --output json --alsologtostderr
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210810223223-30291 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:546: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210810223223-30291 ssh "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210810223223-30291 cp testdata/cp-test.txt multinode-20210810223223-30291-m02:/home/docker/cp-test.txt
helpers_test.go:546: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210810223223-30291 ssh -n multinode-20210810223223-30291-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210810223223-30291 cp testdata/cp-test.txt multinode-20210810223223-30291-m03:/home/docker/cp-test.txt
helpers_test.go:546: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210810223223-30291 ssh -n multinode-20210810223223-30291-m03 "sudo cat /home/docker/cp-test.txt"
E0810 22:39:35.789674   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222707-30291/client.crt: no such file or directory
--- PASS: TestMultiNode/serial/CopyFile (1.84s)

TestMultiNode/serial/StopNode (2.93s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:191: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210810223223-30291 node stop m03
multinode_test.go:191: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210810223223-30291 node stop m03: (2.089129118s)
multinode_test.go:197: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210810223223-30291 status
multinode_test.go:197: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210810223223-30291 status: exit status 7 (421.379505ms)

-- stdout --
	multinode-20210810223223-30291
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20210810223223-30291-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20210810223223-30291-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
multinode_test.go:204: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210810223223-30291 status --alsologtostderr
multinode_test.go:204: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210810223223-30291 status --alsologtostderr: exit status 7 (418.592761ms)

-- stdout --
	multinode-20210810223223-30291
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20210810223223-30291-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20210810223223-30291-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0810 22:39:38.461098    6061 out.go:298] Setting OutFile to fd 1 ...
	I0810 22:39:38.461187    6061 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:39:38.461192    6061 out.go:311] Setting ErrFile to fd 2...
	I0810 22:39:38.461196    6061 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:39:38.461299    6061 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin
	I0810 22:39:38.461459    6061 out.go:305] Setting JSON to false
	I0810 22:39:38.461477    6061 mustload.go:65] Loading cluster: multinode-20210810223223-30291
	I0810 22:39:38.461765    6061 status.go:253] checking status of multinode-20210810223223-30291 ...
	I0810 22:39:38.462121    6061 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:39:38.462182    6061 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:39:38.472979    6061 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37117
	I0810 22:39:38.473423    6061 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:39:38.473997    6061 main.go:130] libmachine: Using API Version  1
	I0810 22:39:38.474019    6061 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:39:38.474364    6061 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:39:38.474544    6061 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetState
	I0810 22:39:38.477384    6061 status.go:328] multinode-20210810223223-30291 host status = "Running" (err=<nil>)
	I0810 22:39:38.477401    6061 host.go:66] Checking if "multinode-20210810223223-30291" exists ...
	I0810 22:39:38.477799    6061 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:39:38.477840    6061 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:39:38.488228    6061 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38971
	I0810 22:39:38.488614    6061 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:39:38.489021    6061 main.go:130] libmachine: Using API Version  1
	I0810 22:39:38.489039    6061 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:39:38.489336    6061 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:39:38.489510    6061 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetIP
	I0810 22:39:38.494543    6061 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:39:38.494941    6061 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:89", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:32:38 +0000 UTC Type:0 Mac:52:54:00:ce:d8:89 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:multinode-20210810223223-30291 Clientid:01:52:54:00:ce:d8:89}
	I0810 22:39:38.494975    6061 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined IP address 192.168.50.32 and MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:39:38.495042    6061 host.go:66] Checking if "multinode-20210810223223-30291" exists ...
	I0810 22:39:38.495402    6061 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:39:38.495432    6061 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:39:38.505657    6061 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40773
	I0810 22:39:38.506021    6061 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:39:38.506473    6061 main.go:130] libmachine: Using API Version  1
	I0810 22:39:38.506493    6061 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:39:38.506777    6061 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:39:38.506928    6061 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .DriverName
	I0810 22:39:38.507102    6061 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0810 22:39:38.507153    6061 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHHostname
	I0810 22:39:38.512109    6061 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:39:38.512481    6061 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d8:89", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:32:38 +0000 UTC Type:0 Mac:52:54:00:ce:d8:89 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:multinode-20210810223223-30291 Clientid:01:52:54:00:ce:d8:89}
	I0810 22:39:38.512528    6061 main.go:130] libmachine: (multinode-20210810223223-30291) DBG | domain multinode-20210810223223-30291 has defined IP address 192.168.50.32 and MAC address 52:54:00:ce:d8:89 in network mk-multinode-20210810223223-30291
	I0810 22:39:38.512612    6061 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHPort
	I0810 22:39:38.512769    6061 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHKeyPath
	I0810 22:39:38.512886    6061 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetSSHUsername
	I0810 22:39:38.513048    6061 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291/id_rsa Username:docker}
	I0810 22:39:38.604536    6061 ssh_runner.go:149] Run: systemctl --version
	I0810 22:39:38.610501    6061 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0810 22:39:38.621990    6061 kubeconfig.go:93] found "multinode-20210810223223-30291" server: "https://192.168.50.32:8443"
	I0810 22:39:38.622027    6061 api_server.go:164] Checking apiserver status ...
	I0810 22:39:38.622061    6061 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0810 22:39:38.632107    6061 ssh_runner.go:149] Run: sudo egrep ^[0-9]+:freezer: /proc/2634/cgroup
	I0810 22:39:38.639048    6061 api_server.go:180] apiserver freezer: "5:freezer:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9099813bef5425d688516ac434247f4d.slice/crio-3b22cef1088cdfa5b9c5d2f6974fa60c0edf2132c0881c9c61984c16c2864f97.scope"
	I0810 22:39:38.639123    6061 ssh_runner.go:149] Run: sudo cat /sys/fs/cgroup/freezer/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9099813bef5425d688516ac434247f4d.slice/crio-3b22cef1088cdfa5b9c5d2f6974fa60c0edf2132c0881c9c61984c16c2864f97.scope/freezer.state
	I0810 22:39:38.645662    6061 api_server.go:202] freezer state: "THAWED"
	I0810 22:39:38.645693    6061 api_server.go:239] Checking apiserver healthz at https://192.168.50.32:8443/healthz ...
	I0810 22:39:38.651873    6061 api_server.go:265] https://192.168.50.32:8443/healthz returned 200:
	ok
	I0810 22:39:38.651899    6061 status.go:419] multinode-20210810223223-30291 apiserver status = Running (err=<nil>)
	I0810 22:39:38.651912    6061 status.go:255] multinode-20210810223223-30291 status: &{Name:multinode-20210810223223-30291 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0810 22:39:38.651933    6061 status.go:253] checking status of multinode-20210810223223-30291-m02 ...
	I0810 22:39:38.652360    6061 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:39:38.652397    6061 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:39:38.663424    6061 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:46211
	I0810 22:39:38.663835    6061 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:39:38.664318    6061 main.go:130] libmachine: Using API Version  1
	I0810 22:39:38.664338    6061 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:39:38.664696    6061 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:39:38.664868    6061 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetState
	I0810 22:39:38.668023    6061 status.go:328] multinode-20210810223223-30291-m02 host status = "Running" (err=<nil>)
	I0810 22:39:38.668038    6061 host.go:66] Checking if "multinode-20210810223223-30291-m02" exists ...
	I0810 22:39:38.668393    6061 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:39:38.668431    6061 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:39:38.679069    6061 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41739
	I0810 22:39:38.679479    6061 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:39:38.679928    6061 main.go:130] libmachine: Using API Version  1
	I0810 22:39:38.679948    6061 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:39:38.680321    6061 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:39:38.680493    6061 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetIP
	I0810 22:39:38.685661    6061 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:39:38.686068    6061 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:3c:a9", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:33:53 +0000 UTC Type:0 Mac:52:54:00:5f:3c:a9 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:multinode-20210810223223-30291-m02 Clientid:01:52:54:00:5f:3c:a9}
	I0810 22:39:38.686156    6061 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined IP address 192.168.50.251 and MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:39:38.686252    6061 host.go:66] Checking if "multinode-20210810223223-30291-m02" exists ...
	I0810 22:39:38.686567    6061 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:39:38.686600    6061 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:39:38.697584    6061 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45123
	I0810 22:39:38.697968    6061 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:39:38.698422    6061 main.go:130] libmachine: Using API Version  1
	I0810 22:39:38.698442    6061 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:39:38.698782    6061 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:39:38.698966    6061 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .DriverName
	I0810 22:39:38.699177    6061 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0810 22:39:38.699206    6061 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHHostname
	I0810 22:39:38.704425    6061 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:39:38.704829    6061 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:3c:a9", ip: ""} in network mk-multinode-20210810223223-30291: {Iface:virbr2 ExpiryTime:2021-08-10 23:33:53 +0000 UTC Type:0 Mac:52:54:00:5f:3c:a9 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:multinode-20210810223223-30291-m02 Clientid:01:52:54:00:5f:3c:a9}
	I0810 22:39:38.704884    6061 main.go:130] libmachine: (multinode-20210810223223-30291-m02) DBG | domain multinode-20210810223223-30291-m02 has defined IP address 192.168.50.251 and MAC address 52:54:00:5f:3c:a9 in network mk-multinode-20210810223223-30291
	I0810 22:39:38.704986    6061 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHPort
	I0810 22:39:38.705148    6061 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHKeyPath
	I0810 22:39:38.705315    6061 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetSSHUsername
	I0810 22:39:38.705497    6061 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210810223223-30291-m02/id_rsa Username:docker}
	I0810 22:39:38.799758    6061 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0810 22:39:38.810077    6061 status.go:255] multinode-20210810223223-30291-m02 status: &{Name:multinode-20210810223223-30291-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0810 22:39:38.810118    6061 status.go:253] checking status of multinode-20210810223223-30291-m03 ...
	I0810 22:39:38.810609    6061 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:39:38.810661    6061 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:39:38.821948    6061 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43601
	I0810 22:39:38.822374    6061 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:39:38.822802    6061 main.go:130] libmachine: Using API Version  1
	I0810 22:39:38.822847    6061 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:39:38.823218    6061 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:39:38.823411    6061 main.go:130] libmachine: (multinode-20210810223223-30291-m03) Calling .GetState
	I0810 22:39:38.826376    6061 status.go:328] multinode-20210810223223-30291-m03 host status = "Stopped" (err=<nil>)
	I0810 22:39:38.826391    6061 status.go:341] host is not running, skipping remaining checks
	I0810 22:39:38.826398    6061 status.go:255] multinode-20210810223223-30291-m03 status: &{Name:multinode-20210810223223-30291-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.93s)
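The apiserver probe recorded in the stderr above follows three steps: `pgrep` for the kube-apiserver PID, an `egrep` of `/proc/<pid>/cgroup` for the freezer hierarchy, then a read of that hierarchy's `freezer.state` (healthy when it reports `THAWED`). A minimal sketch of the path-extraction step, using an illustrative cgroup line in place of a live `/proc` entry:

```shell
# Illustrative /proc/<pid>/cgroup line; in the run above it comes from
# `sudo pgrep -xnf kube-apiserver.*minikube.*` followed by a read of /proc/<pid>/cgroup.
line='5:freezer:/kubepods.slice/kubepods-burstable.slice/crio-3b22.scope'

# Extract the freezer hierarchy path, mirroring `sudo egrep ^[0-9]+:freezer:` in the log.
# `cut -d: -f3-` keeps everything after the second colon, in case the path ever contains one.
freezer_path=$(printf '%s\n' "$line" | grep -E '^[0-9]+:freezer:' | cut -d: -f3-)
echo "$freezer_path"
# → /kubepods.slice/kubepods-burstable.slice/crio-3b22.scope

# The harness then reads /sys/fs/cgroup/freezer${freezer_path}/freezer.state
# and expects "THAWED" before moving on to the HTTPS /healthz check.
```

As the `api_server.go` lines in the log show, the final step is an HTTPS GET of `https://<node-ip>:8443/healthz`, with a 200 `ok` response marking the apiserver as Running.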

TestMultiNode/serial/StartAfterStop (50.28s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:235: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210810223223-30291 node start m03 --alsologtostderr
E0810 22:40:03.474775   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222707-30291/client.crt: no such file or directory
multinode_test.go:235: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210810223223-30291 node start m03 --alsologtostderr: (49.660096118s)
multinode_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210810223223-30291 status
multinode_test.go:256: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (50.28s)

TestMultiNode/serial/RestartKeepsNodes (178.06s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:264: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20210810223223-30291
multinode_test.go:271: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-20210810223223-30291
multinode_test.go:271: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-20210810223223-30291: (7.159493943s)
multinode_test.go:276: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210810223223-30291 --wait=true -v=8 --alsologtostderr
E0810 22:41:13.065338   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/client.crt: no such file or directory
E0810 22:42:36.114618   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/client.crt: no such file or directory
multinode_test.go:276: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210810223223-30291 --wait=true -v=8 --alsologtostderr: (2m50.787955412s)
multinode_test.go:281: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20210810223223-30291
--- PASS: TestMultiNode/serial/RestartKeepsNodes (178.06s)

TestMultiNode/serial/DeleteNode (1.86s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:375: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210810223223-30291 node delete m03
multinode_test.go:375: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210810223223-30291 node delete m03: (1.335371669s)
multinode_test.go:381: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210810223223-30291 status --alsologtostderr
multinode_test.go:405: (dbg) Run:  kubectl get nodes
multinode_test.go:413: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.86s)

TestMultiNode/serial/StopMultiNode (5.29s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210810223223-30291 stop
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210810223223-30291 stop: (5.127031103s)
multinode_test.go:301: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210810223223-30291 status
multinode_test.go:301: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210810223223-30291 status: exit status 7 (82.120305ms)

-- stdout --
	multinode-20210810223223-30291
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20210810223223-30291-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210810223223-30291 status --alsologtostderr
multinode_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210810223223-30291 status --alsologtostderr: exit status 7 (80.617449ms)

-- stdout --
	multinode-20210810223223-30291
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20210810223223-30291-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0810 22:43:34.290913    7233 out.go:298] Setting OutFile to fd 1 ...
	I0810 22:43:34.291026    7233 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:43:34.291035    7233 out.go:311] Setting ErrFile to fd 2...
	I0810 22:43:34.291038    7233 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:43:34.291172    7233 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin
	I0810 22:43:34.291375    7233 out.go:305] Setting JSON to false
	I0810 22:43:34.291396    7233 mustload.go:65] Loading cluster: multinode-20210810223223-30291
	I0810 22:43:34.291741    7233 status.go:253] checking status of multinode-20210810223223-30291 ...
	I0810 22:43:34.292160    7233 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:43:34.292216    7233 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:43:34.303545    7233 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43569
	I0810 22:43:34.303984    7233 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:43:34.304529    7233 main.go:130] libmachine: Using API Version  1
	I0810 22:43:34.304551    7233 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:43:34.304893    7233 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:43:34.305063    7233 main.go:130] libmachine: (multinode-20210810223223-30291) Calling .GetState
	I0810 22:43:34.307845    7233 status.go:328] multinode-20210810223223-30291 host status = "Stopped" (err=<nil>)
	I0810 22:43:34.307861    7233 status.go:341] host is not running, skipping remaining checks
	I0810 22:43:34.307867    7233 status.go:255] multinode-20210810223223-30291 status: &{Name:multinode-20210810223223-30291 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0810 22:43:34.307885    7233 status.go:253] checking status of multinode-20210810223223-30291-m02 ...
	I0810 22:43:34.308170    7233 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0810 22:43:34.308220    7233 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0810 22:43:34.318371    7233 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38705
	I0810 22:43:34.318739    7233 main.go:130] libmachine: () Calling .GetVersion
	I0810 22:43:34.319165    7233 main.go:130] libmachine: Using API Version  1
	I0810 22:43:34.319194    7233 main.go:130] libmachine: () Calling .SetConfigRaw
	I0810 22:43:34.319491    7233 main.go:130] libmachine: () Calling .GetMachineName
	I0810 22:43:34.319663    7233 main.go:130] libmachine: (multinode-20210810223223-30291-m02) Calling .GetState
	I0810 22:43:34.322151    7233 status.go:328] multinode-20210810223223-30291-m02 host status = "Stopped" (err=<nil>)
	I0810 22:43:34.322165    7233 status.go:341] host is not running, skipping remaining checks
	I0810 22:43:34.322171    7233 status.go:255] multinode-20210810223223-30291-m02 status: &{Name:multinode-20210810223223-30291-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (5.29s)

TestMultiNode/serial/RestartMultiNode (117.43s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:335: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210810223223-30291 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0810 22:44:35.790523   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222707-30291/client.crt: no such file or directory
multinode_test.go:335: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210810223223-30291 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m56.755674586s)
multinode_test.go:341: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210810223223-30291 status --alsologtostderr
multinode_test.go:355: (dbg) Run:  kubectl get nodes
multinode_test.go:363: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (117.43s)

TestMultiNode/serial/ValidateNameConflict (60.37s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:424: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20210810223223-30291
multinode_test.go:433: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210810223223-30291-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:433: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-20210810223223-30291-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (98.452024ms)

-- stdout --
	* [multinode-20210810223223-30291-m02] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	  - MINIKUBE_LOCATION=12230
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-20210810223223-30291-m02' is duplicated with machine name 'multinode-20210810223223-30291-m02' in profile 'multinode-20210810223223-30291'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:441: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210810223223-30291-m03 --driver=kvm2  --container-runtime=crio
E0810 22:46:13.067122   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/client.crt: no such file or directory
multinode_test.go:441: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210810223223-30291-m03 --driver=kvm2  --container-runtime=crio: (59.005109457s)
multinode_test.go:448: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20210810223223-30291
multinode_test.go:448: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-20210810223223-30291: exit status 80 (232.730345ms)

-- stdout --
	* Adding node m03 to cluster multinode-20210810223223-30291
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20210810223223-30291-m03 already exists in multinode-20210810223223-30291-m03 profile
	* 
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	╭─────────────────────────────────────────────────────────────────────────────╮
	│                                                                             │
	│    * If the above advice does not help, please let us know:                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose               │
	│                                                                             │
	│    * Please attach the following file to the GitHub issue:                  │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:453: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-20210810223223-30291-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (60.37s)

TestDebPackageInstall/install_amd64_debian:sid/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_debian:sid/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:sid/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver (11.07s)

=== RUN   TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/KVM_Linux_crio_integration/out:/var/tmp debian:sid sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/KVM_Linux_crio_integration/out:/var/tmp debian:sid sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (11.073584892s)
--- PASS: TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver (11.07s)

TestDebPackageInstall/install_amd64_debian:latest/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_debian:latest/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:latest/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver (9.8s)

=== RUN   TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/KVM_Linux_crio_integration/out:/var/tmp debian:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/KVM_Linux_crio_integration/out:/var/tmp debian:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (9.801533023s)
--- PASS: TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver (9.80s)

TestDebPackageInstall/install_amd64_debian:10/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_debian:10/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:10/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian:10/kvm2-driver (9.94s)

=== RUN   TestDebPackageInstall/install_amd64_debian:10/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/KVM_Linux_crio_integration/out:/var/tmp debian:10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/KVM_Linux_crio_integration/out:/var/tmp debian:10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (9.941348903s)
--- PASS: TestDebPackageInstall/install_amd64_debian:10/kvm2-driver (9.94s)

TestDebPackageInstall/install_amd64_debian:9/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_debian:9/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:9/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian:9/kvm2-driver (8.14s)

=== RUN   TestDebPackageInstall/install_amd64_debian:9/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/KVM_Linux_crio_integration/out:/var/tmp debian:9 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/KVM_Linux_crio_integration/out:/var/tmp debian:9 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (8.140478218s)
--- PASS: TestDebPackageInstall/install_amd64_debian:9/kvm2-driver (8.14s)

TestDebPackageInstall/install_amd64_ubuntu:latest/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:latest/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:latest/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver (16.88s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/KVM_Linux_crio_integration/out:/var/tmp ubuntu:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/KVM_Linux_crio_integration/out:/var/tmp ubuntu:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (16.877886695s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver (16.88s)

TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube (0s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver (16.5s)
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/KVM_Linux_crio_integration/out:/var/tmp ubuntu:20.10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/KVM_Linux_crio_integration/out:/var/tmp ubuntu:20.10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (16.495177302s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver (16.50s)

TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube (0s)
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver (16.87s)
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/KVM_Linux_crio_integration/out:/var/tmp ubuntu:20.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/KVM_Linux_crio_integration/out:/var/tmp ubuntu:20.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (16.867649614s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver (16.87s)

TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube (0s)
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver (15.35s)
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/KVM_Linux_crio_integration/out:/var/tmp ubuntu:18.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/KVM_Linux_crio_integration/out:/var/tmp ubuntu:18.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (15.353544779s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver (15.35s)

TestScheduledStopUnix (92.76s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-20210810225112-30291 --memory=2048 --driver=kvm2  --container-runtime=crio
E0810 22:51:13.065379   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-20210810225112-30291 --memory=2048 --driver=kvm2  --container-runtime=crio: (1m3.776864086s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210810225112-30291 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20210810225112-30291 -n scheduled-stop-20210810225112-30291
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210810225112-30291 --schedule 8s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210810225112-30291 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210810225112-30291 -n scheduled-stop-20210810225112-30291
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20210810225112-30291
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210810225112-30291 --schedule 5s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20210810225112-30291
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-20210810225112-30291: exit status 7 (67.195979ms)
-- stdout --
	scheduled-stop-20210810225112-30291
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210810225112-30291 -n scheduled-stop-20210810225112-30291
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210810225112-30291 -n scheduled-stop-20210810225112-30291: exit status 7 (67.709025ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-20210810225112-30291" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-20210810225112-30291
--- PASS: TestScheduledStopUnix (92.76s)

TestRunningBinaryUpgrade (278.63s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Run:  /tmp/minikube-v1.6.2.052041226.exe start -p running-upgrade-20210810225245-30291 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:128: (dbg) Non-zero exit: /tmp/minikube-v1.6.2.052041226.exe start -p running-upgrade-20210810225245-30291 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: exit status 70 (3.141933005s)
-- stdout --
	! [running-upgrade-20210810225245-30291] minikube v1.6.2 on Debian 9.13
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	  - MINIKUBE_LOCATION=12230
	  - KUBECONFIG=/tmp/legacy_kubeconfig035186145
	* Selecting 'kvm2' driver from user configuration (alternates: [none])
	* Downloading VM boot image ...
-- /stdout --
** stderr ** 
	* minikube 1.22.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.22.0
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	
	! 'kvm2' driver reported an issue: /usr/bin/virsh domcapabilities --virttype kvm failed:
	error: failed to get emulator capabilities
	error: invalid argument: KVM is not supported by '/usr/bin/qemu-system-x86_64' on this host
	* Suggestion: Follow your Linux distribution instructions for configuring KVM
	* Documentation: https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/
	
	    > minikube-v1.6.0.iso.sha256: 65 B / 65 B [--------------] 100.00% ? p/s 0s
	    > minikube-v1.6.0.iso: 150.93 MiB / 150.93 MiB [] 100.00% 172.62 MiB p/s 2s
	X Failed to cache ISO: https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso: Failed to open file for checksum: open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/iso/minikube-v1.6.0.iso.download: no such file or directory
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose
** /stderr **
version_upgrade_test.go:128: (dbg) Run:  /tmp/minikube-v1.6.2.052041226.exe start -p running-upgrade-20210810225245-30291 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:128: (dbg) Done: /tmp/minikube-v1.6.2.052041226.exe start -p running-upgrade-20210810225245-30291 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m44.234707062s)
version_upgrade_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-20210810225245-30291 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0810 22:56:13.074913   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/client.crt: no such file or directory
version_upgrade_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-20210810225245-30291 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m48.171045744s)
helpers_test.go:176: Cleaning up "running-upgrade-20210810225245-30291" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-20210810225245-30291
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-20210810225245-30291: (1.188282092s)
--- PASS: TestRunningBinaryUpgrade (278.63s)

TestKubernetesUpgrade (219.47s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:224: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210810225622-30291 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:224: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210810225622-30291 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m18.909846135s)
version_upgrade_test.go:229: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-20210810225622-30291
version_upgrade_test.go:229: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-20210810225622-30291: (2.128515871s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-20210810225622-30291 status --format={{.Host}}
version_upgrade_test.go:234: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-20210810225622-30291 status --format={{.Host}}: exit status 7 (76.000821ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:236: status error: exit status 7 (may be ok)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210810225622-30291 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:245: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210810225622-30291 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m46.569866591s)
version_upgrade_test.go:250: (dbg) Run:  kubectl --context kubernetes-upgrade-20210810225622-30291 version --output=json
version_upgrade_test.go:269: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:271: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210810225622-30291 --memory=2200 --kubernetes-version=v1.14.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:271: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210810225622-30291 --memory=2200 --kubernetes-version=v1.14.0 --driver=kvm2  --container-runtime=crio: exit status 106 (169.100316ms)
-- stdout --
	* [kubernetes-upgrade-20210810225622-30291] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	  - MINIKUBE_LOCATION=12230
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.22.0-rc.0 cluster to v1.14.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.14.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20210810225622-30291
	    minikube start -p kubernetes-upgrade-20210810225622-30291 --kubernetes-version=v1.14.0
	    
	    2) Create a second cluster with Kubernetes 1.14.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20210810225622-302912 --kubernetes-version=v1.14.0
	    
	    3) Use the existing cluster at version Kubernetes 1.22.0-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20210810225622-30291 --kubernetes-version=v1.22.0-rc.0
	    
** /stderr **
version_upgrade_test.go:275: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:277: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210810225622-30291 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:277: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210810225622-30291 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (30.22345837s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-20210810225622-30291" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-20210810225622-30291
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-20210810225622-30291: (1.30612617s)
--- PASS: TestKubernetesUpgrade (219.47s)

TestPause/serial/Start (108.92s)
=== RUN   TestPause/serial/Start
=== CONT  TestPause/serial/Start
pause_test.go:77: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20210810225245-30291 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:77: (dbg) Done: out/minikube-linux-amd64 start -p pause-20210810225245-30291 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m48.922538748s)
--- PASS: TestPause/serial/Start (108.92s)

TestPause/serial/SecondStartNoReconfiguration (18.38s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:89: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20210810225245-30291 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0810 22:54:35.789702   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222707-30291/client.crt: no such file or directory
pause_test.go:89: (dbg) Done: out/minikube-linux-amd64 start -p pause-20210810225245-30291 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (18.352534278s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (18.38s)

TestPause/serial/Pause (1.97s)
=== RUN   TestPause/serial/Pause
pause_test.go:107: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20210810225245-30291 --alsologtostderr -v=5
pause_test.go:107: (dbg) Done: out/minikube-linux-amd64 pause -p pause-20210810225245-30291 --alsologtostderr -v=5: (1.970308672s)
--- PASS: TestPause/serial/Pause (1.97s)

TestPause/serial/VerifyStatus (0.32s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-20210810225245-30291 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-20210810225245-30291 --output=json --layout=cluster: exit status 2 (316.992947ms)
-- stdout --
	{"Name":"pause-20210810225245-30291","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.22.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20210810225245-30291","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.32s)
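The `--output=json --layout=cluster` status above reports component health as HTTP-style codes (200 OK, 405 Stopped, 418 Paused). A minimal sketch of reading that payload with Python's standard `json` module, using the JSON literal copied verbatim from the log (illustrative only, not part of the test suite):

```python
import json

# Cluster-layout status as printed by:
#   minikube status -p pause-20210810225245-30291 --output=json --layout=cluster
# (copied verbatim from the log above)
raw = '''{"Name":"pause-20210810225245-30291","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.22.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20210810225245-30291","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}'''

status = json.loads(raw)
print(status["StatusName"])  # overall cluster state: Paused
for node in status["Nodes"]:
    for comp in node["Components"].values():
        # per-component state: apiserver Paused, kubelet Stopped
        print(comp["Name"], comp["StatusName"])
```

This is why the test exits with status 2 here: a paused cluster is an expected non-running state, which `status_test.go` treats as success.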

TestPause/serial/Unpause (3.37s)
=== RUN   TestPause/serial/Unpause
pause_test.go:118: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-20210810225245-30291 --alsologtostderr -v=5
pause_test.go:118: (dbg) Done: out/minikube-linux-amd64 unpause -p pause-20210810225245-30291 --alsologtostderr -v=5: (3.3663819s)
--- PASS: TestPause/serial/Unpause (3.37s)

TestPause/serial/PauseAgain (5.83s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:107: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20210810225245-30291 --alsologtostderr -v=5
pause_test.go:107: (dbg) Done: out/minikube-linux-amd64 pause -p pause-20210810225245-30291 --alsologtostderr -v=5: (5.825752527s)
--- PASS: TestPause/serial/PauseAgain (5.83s)

TestPause/serial/DeletePaused (0.98s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:129: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-20210810225245-30291 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.98s)

TestPause/serial/VerifyDeletedResources (0.23s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:139: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.23s)

TestNetworkPlugins/group/false (0.88s)
=== RUN   TestNetworkPlugins/group/false
net_test.go:213: (dbg) Run:  out/minikube-linux-amd64 start -p false-20210810225505-30291 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:213: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-20210810225505-30291 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (143.80064ms)
-- stdout --
	* [false-20210810225505-30291] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	  - MINIKUBE_LOCATION=12230
	* Using the kvm2 driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I0810 22:55:05.850194    2759 out.go:298] Setting OutFile to fd 1 ...
	I0810 22:55:05.850301    2759 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:55:05.850321    2759 out.go:311] Setting ErrFile to fd 2...
	I0810 22:55:05.850326    2759 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0810 22:55:05.850492    2759 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin
	I0810 22:55:05.850857    2759 out.go:305] Setting JSON to false
	I0810 22:55:05.889060    2759 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-3","uptime":9466,"bootTime":1628626640,"procs":189,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0810 22:55:05.889221    2759 start.go:121] virtualization: kvm guest
	I0810 22:55:05.892206    2759 out.go:177] * [false-20210810225505-30291] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0810 22:55:05.893817    2759 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0810 22:55:05.892370    2759 notify.go:169] Checking for updates...
	I0810 22:55:05.895398    2759 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0810 22:55:05.896984    2759 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	I0810 22:55:05.898497    2759 out.go:177]   - MINIKUBE_LOCATION=12230
	I0810 22:55:05.899333    2759 driver.go:335] Setting default libvirt URI to qemu:///system
	I0810 22:55:05.930590    2759 out.go:177] * Using the kvm2 driver based on user configuration
	I0810 22:55:05.930636    2759 start.go:278] selected driver: kvm2
	I0810 22:55:05.930644    2759 start.go:751] validating driver "kvm2" against <nil>
	I0810 22:55:05.930666    2759 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0810 22:55:05.933151    2759 out.go:177] 
	W0810 22:55:05.933271    2759 out.go:242] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0810 22:55:05.934723    2759 out.go:177] 
** /stderr **
helpers_test.go:176: Cleaning up "false-20210810225505-30291" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-20210810225505-30291
--- PASS: TestNetworkPlugins/group/false (0.88s)

TestNetworkPlugins/group/auto/Start (87.31s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p auto-20210810225505-30291 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=kvm2  --container-runtime=crio
=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p auto-20210810225505-30291 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=kvm2  --container-runtime=crio: (1m27.313161623s)
--- PASS: TestNetworkPlugins/group/auto/Start (87.31s)

TestNetworkPlugins/group/kindnet/Start (120.59s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-20210810225505-30291 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=kvm2  --container-runtime=crio
=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-20210810225505-30291 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=kvm2  --container-runtime=crio: (2m0.590694584s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (120.59s)

TestNetworkPlugins/group/auto/KubeletFlags (0.27s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-20210810225505-30291 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

TestNetworkPlugins/group/auto/NetCatPod (14.29s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context auto-20210810225505-30291 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:340: "netcat-66fbc655d5-zb6kk" [07a9ffdc-204e-4244-986e-bb12b2d3c63b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:340: "netcat-66fbc655d5-zb6kk" [07a9ffdc-204e-4244-986e-bb12b2d3c63b] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.464985593s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (14.29s)

TestNetworkPlugins/group/auto/DNS (0.32s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:162: (dbg) Run:  kubectl --context auto-20210810225505-30291 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.32s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:181: (dbg) Run:  kubectl --context auto-20210810225505-30291 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.29s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:231: (dbg) Run:  kubectl --context auto-20210810225505-30291 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.27s)

                                                
                                    
TestNetworkPlugins/group/cilium/Start (195.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p cilium-20210810225506-30291 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=kvm2  --container-runtime=crio

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p cilium-20210810225506-30291 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=kvm2  --container-runtime=crio: (3m15.72896906s)
--- PASS: TestNetworkPlugins/group/cilium/Start (195.73s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.25s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:208: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-20210810225245-30291
version_upgrade_test.go:208: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-20210810225245-30291: (1.251715883s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:340: "kindnet-vhjmk" [773e9fff-cc4d-4141-87a4-3fe1e865c591] Running
net_test.go:106: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.030386351s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-20210810225505-30291 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (13.7s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context kindnet-20210810225505-30291 replace --force -f testdata/netcat-deployment.yaml

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:145: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:340: "netcat-66fbc655d5-nnxkh" [18850553-dcbf-4613-8e41-e6e086c425a3] Pending
helpers_test.go:340: "netcat-66fbc655d5-nnxkh" [18850553-dcbf-4613-8e41-e6e086c425a3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0810 22:59:35.790581   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222707-30291/client.crt: no such file or directory
helpers_test.go:340: "netcat-66fbc655d5-nnxkh" [18850553-dcbf-4613-8e41-e6e086c425a3] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.023063547s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.70s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:162: (dbg) Run:  kubectl --context kindnet-20210810225505-30291 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.30s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:181: (dbg) Run:  kubectl --context kindnet-20210810225505-30291 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.28s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:231: (dbg) Run:  kubectl --context kindnet-20210810225505-30291 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.30s)

                                                
                                    
TestNetworkPlugins/group/custom-weave/Start (101.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p custom-weave-20210810225506-30291 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=kvm2  --container-runtime=crio

                                                
                                                
=== CONT  TestNetworkPlugins/group/custom-weave/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p custom-weave-20210810225506-30291 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=kvm2  --container-runtime=crio: (1m41.195181371s)
--- PASS: TestNetworkPlugins/group/custom-weave/Start (101.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (103.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-20210810225505-30291 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=kvm2  --container-runtime=crio

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-20210810225505-30291 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m43.893195519s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (103.89s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (93.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-20210810225505-30291 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=flannel --driver=kvm2  --container-runtime=crio
E0810 23:01:13.066075   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/flannel/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p flannel-20210810225505-30291 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m33.862682286s)
--- PASS: TestNetworkPlugins/group/flannel/Start (93.86s)

                                                
                                    
TestNetworkPlugins/group/custom-weave/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-weave/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-weave-20210810225506-30291 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-weave/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/custom-weave/NetCatPod (19.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-weave/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context custom-weave-20210810225506-30291 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:340: "netcat-66fbc655d5-4ckw5" [4925f39e-6540-4f36-b19b-6bdce14c323f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/custom-weave/NetCatPod
helpers_test.go:340: "netcat-66fbc655d5-4ckw5" [4925f39e-6540-4f36-b19b-6bdce14c323f] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/custom-weave/NetCatPod
net_test.go:145: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: app=netcat healthy within 19.020044023s
--- PASS: TestNetworkPlugins/group/custom-weave/NetCatPod (19.98s)

                                                
                                    
TestNetworkPlugins/group/cilium/ControllerPod (5.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:340: "cilium-zlf72" [8e30097e-9904-4ce5-a65d-421976de263c] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.034528383s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.04s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-20210810225505-30291 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context enable-default-cni-20210810225505-30291 replace --force -f testdata/netcat-deployment.yaml

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:145: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:340: "netcat-66fbc655d5-vqn9t" [b84e77e4-4bc8-4a05-910e-0f3cfdcebfd5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
helpers_test.go:340: "netcat-66fbc655d5-vqn9t" [b84e77e4-4bc8-4a05-910e-0f3cfdcebfd5] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:145: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.0103036s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.56s)

                                                
                                    
TestNetworkPlugins/group/cilium/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p cilium-20210810225506-30291 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/cilium/NetCatPod (15.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context cilium-20210810225506-30291 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:340: "netcat-66fbc655d5-ht8xp" [6baa6b21-cb2b-4598-b052-17a5412e4dfd] Pending
helpers_test.go:340: "netcat-66fbc655d5-ht8xp" [6baa6b21-cb2b-4598-b052-17a5412e4dfd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/NetCatPod
helpers_test.go:340: "netcat-66fbc655d5-ht8xp" [6baa6b21-cb2b-4598-b052-17a5412e4dfd] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:145: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 15.028957512s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (15.71s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (96.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-20210810225505-30291 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=kvm2  --container-runtime=crio

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p bridge-20210810225505-30291 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m36.433702949s)
--- PASS: TestNetworkPlugins/group/bridge/Start (96.43s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20210810225505-30291 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:181: (dbg) Run:  kubectl --context enable-default-cni-20210810225505-30291 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.36s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:231: (dbg) Run:  kubectl --context enable-default-cni-20210810225505-30291 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.33s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (192.81s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20210810230159-30291 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.14.0

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20210810230159-30291 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.14.0: (3m12.807564637s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (192.81s)

                                                
                                    
TestNetworkPlugins/group/cilium/DNS (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:162: (dbg) Run:  kubectl --context cilium-20210810225506-30291 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.41s)

                                                
                                    
TestNetworkPlugins/group/cilium/Localhost (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:181: (dbg) Run:  kubectl --context cilium-20210810225506-30291 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.27s)

                                                
                                    
TestNetworkPlugins/group/cilium/HairPin (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:231: (dbg) Run:  kubectl --context cilium-20210810225506-30291 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (167.85s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20210810230207-30291 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.22.0-rc.0

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20210810230207-30291 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.22.0-rc.0: (2m47.849467711s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (167.85s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (8.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-system" ...
helpers_test.go:340: "kube-flannel-ds-amd64-2pkd4" [a1565c17-2792-471d-9bb3-1da43ae43e00] Pending: Initialized:ContainersNotInitialized (containers with incomplete status: [install-cni]) / Ready:ContainersNotReady (containers with unready status: [kube-flannel]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-flannel])
helpers_test.go:340: "kube-flannel-ds-amd64-2pkd4" [a1565c17-2792-471d-9bb3-1da43ae43e00] Pending / Ready:ContainersNotReady (containers with unready status: [kube-flannel]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-flannel])
helpers_test.go:340: "kube-flannel-ds-amd64-2pkd4" [a1565c17-2792-471d-9bb3-1da43ae43e00] Running
net_test.go:106: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 8.030982564s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (8.03s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-20210810225505-30291 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.47s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context flannel-20210810225505-30291 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:340: "netcat-66fbc655d5-blvkk" [35d460c5-9b96-4f8e-9dde-dcb61ee9d1ae] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:340: "netcat-66fbc655d5-blvkk" [35d460c5-9b96-4f8e-9dde-dcb61ee9d1ae] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.011386255s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.82s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:162: (dbg) Run:  kubectl --context flannel-20210810225505-30291 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.50s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:181: (dbg) Run:  kubectl --context flannel-20210810225505-30291 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.34s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:231: (dbg) Run:  kubectl --context flannel-20210810225505-30291 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.35s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/FirstStart (97s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20210810230304-30291 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.21.3
E0810 23:03:12.804715   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210810225505-30291/client.crt: no such file or directory
E0810 23:03:12.810077   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210810225505-30291/client.crt: no such file or directory
E0810 23:03:12.820348   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210810225505-30291/client.crt: no such file or directory
E0810 23:03:12.840705   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210810225505-30291/client.crt: no such file or directory
E0810 23:03:12.881112   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210810225505-30291/client.crt: no such file or directory
E0810 23:03:12.962310   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210810225505-30291/client.crt: no such file or directory
E0810 23:03:13.122778   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210810225505-30291/client.crt: no such file or directory
E0810 23:03:13.443417   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210810225505-30291/client.crt: no such file or directory
E0810 23:03:14.084479   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210810225505-30291/client.crt: no such file or directory
E0810 23:03:15.364727   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210810225505-30291/client.crt: no such file or directory
E0810 23:03:17.925541   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210810225505-30291/client.crt: no such file or directory
E0810 23:03:23.045711   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210810225505-30291/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20210810230304-30291 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.21.3: (1m37.003045531s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (97.00s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-20210810225505-30291 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (15.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context bridge-20210810225505-30291 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:340: "netcat-66fbc655d5-4l99m" [cf86c5ab-c96a-4ba8-aef9-ec14e60bd96f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0810 23:03:33.286375   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210810225505-30291/client.crt: no such file or directory
helpers_test.go:340: "netcat-66fbc655d5-4l99m" [cf86c5ab-c96a-4ba8-aef9-ec14e60bd96f] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 14.448259256s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (15.07s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (2.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:162: (dbg) Run:  kubectl --context bridge-20210810225505-30291 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Done: kubectl --context bridge-20210810225505-30291 exec deployment/netcat -- nslookup kubernetes.default: (2.51604967s)
--- PASS: TestNetworkPlugins/group/bridge/DNS (2.52s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:181: (dbg) Run:  kubectl --context bridge-20210810225505-30291 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:231: (dbg) Run:  kubectl --context bridge-20210810225505-30291 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.27s)
E0810 23:11:32.924831   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210810225506-30291/client.crt: no such file or directory
E0810 23:11:44.180665   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210810225506-30291/client.crt: no such file or directory
E0810 23:11:46.460411   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210810225505-30291/client.crt: no such file or directory
E0810 23:12:03.352786   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210810225506-30291/client.crt: no such file or directory
E0810 23:12:11.867337   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210810225506-30291/client.crt: no such file or directory
E0810 23:12:14.144851   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210810225505-30291/client.crt: no such file or directory

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (92.52s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20210810230349-30291 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.22.0-rc.0
E0810 23:03:53.767475   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210810225505-30291/client.crt: no such file or directory
E0810 23:04:24.513323   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/kindnet-20210810225505-30291/client.crt: no such file or directory
E0810 23:04:24.518687   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/kindnet-20210810225505-30291/client.crt: no such file or directory
E0810 23:04:24.529009   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/kindnet-20210810225505-30291/client.crt: no such file or directory
E0810 23:04:24.549318   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/kindnet-20210810225505-30291/client.crt: no such file or directory
E0810 23:04:24.589609   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/kindnet-20210810225505-30291/client.crt: no such file or directory
E0810 23:04:24.669968   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/kindnet-20210810225505-30291/client.crt: no such file or directory
E0810 23:04:24.830385   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/kindnet-20210810225505-30291/client.crt: no such file or directory
E0810 23:04:25.150987   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/kindnet-20210810225505-30291/client.crt: no such file or directory
E0810 23:04:25.791929   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/kindnet-20210810225505-30291/client.crt: no such file or directory
E0810 23:04:27.072594   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/kindnet-20210810225505-30291/client.crt: no such file or directory
E0810 23:04:29.715473   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/kindnet-20210810225505-30291/client.crt: no such file or directory
E0810 23:04:34.728020   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210810225505-30291/client.crt: no such file or directory
E0810 23:04:34.836290   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/kindnet-20210810225505-30291/client.crt: no such file or directory
E0810 23:04:35.789884   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222707-30291/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20210810230349-30291 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.22.0-rc.0: (1m32.515316819s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (92.52s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/DeployApp (11.76s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context default-k8s-different-port-20210810230304-30291 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:340: "busybox" [277cfc3f-7a11-4206-9652-bd93ee144dd6] Pending
helpers_test.go:340: "busybox" [277cfc3f-7a11-4206-9652-bd93ee144dd6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0810 23:04:45.076837   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/kindnet-20210810225505-30291/client.crt: no such file or directory
helpers_test.go:340: "busybox" [277cfc3f-7a11-4206-9652-bd93ee144dd6] Running
start_stop_delete_test.go:169: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 11.062920676s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context default-k8s-different-port-20210810230304-30291 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (11.76s)
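The final step of the DeployApp check above execs `ulimit -n` inside the busybox pod to confirm the container's open-file limit is readable. A minimal local sketch of that probe, run directly rather than through `kubectl exec` (so no context or pod name is needed; this is an illustration, not the harness's actual invocation path):

```shell
# Sketch: the same probe the harness runs inside the busybox pod,
# executed locally for illustration. Prints the soft limit on open
# file descriptors (a number, or "unlimited").
/bin/sh -c "ulimit -n"
```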

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (1.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-different-port-20210810230304-30291 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:178: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-different-port-20210810230304-30291 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.04763842s)
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context default-k8s-different-port-20210810230304-30291 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (1.18s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/Stop (68.4s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-different-port-20210810230304-30291 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-different-port-20210810230304-30291 --alsologtostderr -v=3: (1m8.40293255s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (68.40s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (11.71s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context no-preload-20210810230207-30291 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:340: "busybox" [5571eb40-7173-49f2-924b-8fa0a8d77cc0] Pending
helpers_test.go:340: "busybox" [5571eb40-7173-49f2-924b-8fa0a8d77cc0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:340: "busybox" [5571eb40-7173-49f2-924b-8fa0a8d77cc0] Running
E0810 23:05:05.558024   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/kindnet-20210810225505-30291/client.crt: no such file or directory
start_stop_delete_test.go:169: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.082322011s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context no-preload-20210810230207-30291 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.71s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-20210810230207-30291 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:178: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-20210810230207-30291 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.03543565s)
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context no-preload-20210810230207-30291 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.16s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (3.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-20210810230207-30291 --alsologtostderr -v=3
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-20210810230207-30291 --alsologtostderr -v=3: (3.108834652s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210810230207-30291 -n no-preload-20210810230207-30291
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210810230207-30291 -n no-preload-20210810230207-30291: exit status 7 (69.278981ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-20210810230207-30291 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.16s)
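The `status error: exit status 7 (may be ok)` lines above show that `minikube status` reports a stopped host through a non-zero exit code, which the harness treats as informational rather than fatal. A hedged sketch of that pattern, with a stand-in `check_status` function in place of the real `out/minikube-linux-amd64 status` call (the "Stopped" output and exit code 7 are taken from the log above, not asserted as minikube's documented contract):

```shell
# check_status stands in for `minikube status` against a stopped
# profile (assumption: prints "Stopped" and exits 7, as in the log).
check_status() {
  echo "Stopped"
  return 7
}

# Treat the non-zero status as informational, the way the harness does:
# capture the output, then report the exit code instead of aborting.
if out=$(check_status); then
  echo "host state: $out"
else
  rc=$?
  echo "status error: exit status $rc (may be ok)"
fi
```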

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (348.64s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20210810230207-30291 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.22.0-rc.0

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20210810230207-30291 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.22.0-rc.0: (5m48.363636973s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210810230207-30291 -n no-preload-20210810230207-30291
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (348.64s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (11.66s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context old-k8s-version-20210810230159-30291 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:340: "busybox" [6adc5f12-fa2f-11eb-a2ed-525400ce8af1] Pending
helpers_test.go:340: "busybox" [6adc5f12-fa2f-11eb-a2ed-525400ce8af1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:340: "busybox" [6adc5f12-fa2f-11eb-a2ed-525400ce8af1] Running

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:169: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.0420511s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context old-k8s-version-20210810230159-30291 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.66s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.83s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-20210810230349-30291 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:178: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-20210810230349-30291 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.82832331s)
start_stop_delete_test.go:184: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.83s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (4.13s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-20210810230349-30291 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-20210810230349-30291 --alsologtostderr -v=3: (4.127377776s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (4.13s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-20210810230159-30291 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:178: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-20210810230159-30291 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.030779803s)
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context old-k8s-version-20210810230159-30291 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.17s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (3.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-20210810230159-30291 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-20210810230159-30291 --alsologtostderr -v=3: (3.152291156s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.15s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210810230349-30291 -n newest-cni-20210810230349-30291
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210810230349-30291 -n newest-cni-20210810230349-30291: exit status 7 (83.20181ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-20210810230349-30291 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (86.95s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20210810230349-30291 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.22.0-rc.0

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20210810230349-30291 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.22.0-rc.0: (1m26.660061985s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210810230349-30291 -n newest-cni-20210810230349-30291
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (86.95s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210810230159-30291 -n old-k8s-version-20210810230159-30291
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210810230159-30291 -n old-k8s-version-20210810230159-30291: exit status 7 (75.664141ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-20210810230159-30291 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (462.9s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20210810230159-30291 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.14.0
E0810 23:05:46.519127   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/kindnet-20210810225505-30291/client.crt: no such file or directory
E0810 23:05:56.649222   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210810225505-30291/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20210810230159-30291 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.14.0: (7m42.645477849s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210810230159-30291 -n old-k8s-version-20210810230159-30291
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (462.90s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210810230304-30291 -n default-k8s-different-port-20210810230304-30291
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210810230304-30291 -n default-k8s-different-port-20210810230304-30291: exit status 7 (78.763108ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-different-port-20210810230304-30291 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-different-port/serial/SecondStart (391.55s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20210810230304-30291 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.21.3
E0810 23:06:13.065352   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/client.crt: no such file or directory
E0810 23:06:32.924693   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210810225506-30291/client.crt: no such file or directory
E0810 23:06:32.929963   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210810225506-30291/client.crt: no such file or directory
E0810 23:06:32.940312   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210810225506-30291/client.crt: no such file or directory
E0810 23:06:32.961315   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210810225506-30291/client.crt: no such file or directory
E0810 23:06:33.001758   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210810225506-30291/client.crt: no such file or directory
E0810 23:06:33.082280   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210810225506-30291/client.crt: no such file or directory
E0810 23:06:33.242709   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210810225506-30291/client.crt: no such file or directory
E0810 23:06:33.562916   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210810225506-30291/client.crt: no such file or directory
E0810 23:06:34.203243   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210810225506-30291/client.crt: no such file or directory
E0810 23:06:35.483733   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210810225506-30291/client.crt: no such file or directory
E0810 23:06:38.044047   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210810225506-30291/client.crt: no such file or directory
E0810 23:06:44.180472   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210810225506-30291/client.crt: no such file or directory
E0810 23:06:44.185812   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210810225506-30291/client.crt: no such file or directory
E0810 23:06:44.196139   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210810225506-30291/client.crt: no such file or directory
E0810 23:06:44.216819   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210810225506-30291/client.crt: no such file or directory
E0810 23:06:44.257276   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210810225506-30291/client.crt: no such file or directory
E0810 23:06:44.338024   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210810225506-30291/client.crt: no such file or directory
E0810 23:06:44.498796   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210810225506-30291/client.crt: no such file or directory
E0810 23:06:44.819710   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210810225506-30291/client.crt: no such file or directory
E0810 23:06:45.460832   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210810225506-30291/client.crt: no such file or directory
E0810 23:06:45.910192   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210810225506-30291/client.crt: no such file or directory
E0810 23:06:46.460526   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210810225505-30291/client.crt: no such file or directory
E0810 23:06:46.465787   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210810225505-30291/client.crt: no such file or directory
E0810 23:06:46.476030   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210810225505-30291/client.crt: no such file or directory
E0810 23:06:46.496289   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210810225505-30291/client.crt: no such file or directory
E0810 23:06:46.536595   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210810225505-30291/client.crt: no such file or directory
E0810 23:06:46.617007   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210810225505-30291/client.crt: no such file or directory
E0810 23:06:46.741350   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210810225506-30291/client.crt: no such file or directory
E0810 23:06:46.777543   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210810225505-30291/client.crt: no such file or directory
E0810 23:06:47.098018   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210810225505-30291/client.crt: no such file or directory
E0810 23:06:47.738757   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210810225505-30291/client.crt: no such file or directory
E0810 23:06:49.019847   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210810225505-30291/client.crt: no such file or directory
E0810 23:06:49.302139   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210810225506-30291/client.crt: no such file or directory
E0810 23:06:51.580478   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210810225505-30291/client.crt: no such file or directory
E0810 23:06:54.423214   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210810225506-30291/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20210810230304-30291 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.21.3: (6m31.283143788s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210810230304-30291 -n default-k8s-different-port-20210810230304-30291
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (391.55s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:246: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:257: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.37s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-20210810230349-30291 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.37s)

TestStartStop/group/newest-cni/serial/Pause (3.02s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-20210810230349-30291 --alsologtostderr -v=1
E0810 23:06:56.150536   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210810225506-30291/client.crt: no such file or directory
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20210810230349-30291 -n newest-cni-20210810230349-30291
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20210810230349-30291 -n newest-cni-20210810230349-30291: exit status 2 (306.006415ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20210810230349-30291 -n newest-cni-20210810230349-30291
E0810 23:06:56.701330   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210810225505-30291/client.crt: no such file or directory
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20210810230349-30291 -n newest-cni-20210810230349-30291: exit status 2 (292.452687ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-20210810230349-30291 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20210810230349-30291 -n newest-cni-20210810230349-30291
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20210810230349-30291 -n newest-cni-20210810230349-30291
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.02s)

TestStartStop/group/embed-certs/serial/FirstStart (87s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20210810230700-30291 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.21.3
E0810 23:07:04.664334   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210810225506-30291/client.crt: no such file or directory
E0810 23:07:06.942525   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210810225505-30291/client.crt: no such file or directory
E0810 23:07:08.439423   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/kindnet-20210810225505-30291/client.crt: no such file or directory
E0810 23:07:16.631510   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210810225506-30291/client.crt: no such file or directory
E0810 23:07:25.145377   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210810225506-30291/client.crt: no such file or directory
E0810 23:07:27.422763   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210810225505-30291/client.crt: no such file or directory
E0810 23:07:38.837716   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222707-30291/client.crt: no such file or directory
E0810 23:07:40.144597   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/flannel-20210810225505-30291/client.crt: no such file or directory
E0810 23:07:40.149848   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/flannel-20210810225505-30291/client.crt: no such file or directory
E0810 23:07:40.160104   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/flannel-20210810225505-30291/client.crt: no such file or directory
E0810 23:07:40.180258   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/flannel-20210810225505-30291/client.crt: no such file or directory
E0810 23:07:40.220549   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/flannel-20210810225505-30291/client.crt: no such file or directory
E0810 23:07:40.300878   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/flannel-20210810225505-30291/client.crt: no such file or directory
E0810 23:07:40.461378   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/flannel-20210810225505-30291/client.crt: no such file or directory
E0810 23:07:40.781965   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/flannel-20210810225505-30291/client.crt: no such file or directory
E0810 23:07:41.423089   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/flannel-20210810225505-30291/client.crt: no such file or directory
E0810 23:07:42.703622   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/flannel-20210810225505-30291/client.crt: no such file or directory
E0810 23:07:45.263895   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/flannel-20210810225505-30291/client.crt: no such file or directory
E0810 23:07:50.384482   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/flannel-20210810225505-30291/client.crt: no such file or directory
E0810 23:07:57.591672   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210810225506-30291/client.crt: no such file or directory
E0810 23:08:00.624874   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/flannel-20210810225505-30291/client.crt: no such file or directory
E0810 23:08:06.106389   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210810225506-30291/client.crt: no such file or directory
E0810 23:08:08.383943   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210810225505-30291/client.crt: no such file or directory
E0810 23:08:12.804584   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210810225505-30291/client.crt: no such file or directory
E0810 23:08:21.106085   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/flannel-20210810225505-30291/client.crt: no such file or directory
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20210810230700-30291 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.21.3: (1m26.998078908s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (87.00s)

TestStartStop/group/embed-certs/serial/DeployApp (12.64s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context embed-certs-20210810230700-30291 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:340: "busybox" [f2e596d8-4bac-418a-a2b4-ec4144efebe7] Pending
helpers_test.go:340: "busybox" [f2e596d8-4bac-418a-a2b4-ec4144efebe7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0810 23:08:30.374147   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/bridge-20210810225505-30291/client.crt: no such file or directory
E0810 23:08:30.379458   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/bridge-20210810225505-30291/client.crt: no such file or directory
E0810 23:08:30.389752   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/bridge-20210810225505-30291/client.crt: no such file or directory
E0810 23:08:30.410055   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/bridge-20210810225505-30291/client.crt: no such file or directory
E0810 23:08:30.451120   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/bridge-20210810225505-30291/client.crt: no such file or directory
E0810 23:08:30.531518   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/bridge-20210810225505-30291/client.crt: no such file or directory
E0810 23:08:30.692580   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/bridge-20210810225505-30291/client.crt: no such file or directory
E0810 23:08:31.012777   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/bridge-20210810225505-30291/client.crt: no such file or directory
E0810 23:08:31.653737   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/bridge-20210810225505-30291/client.crt: no such file or directory
E0810 23:08:32.934453   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/bridge-20210810225505-30291/client.crt: no such file or directory
helpers_test.go:340: "busybox" [f2e596d8-4bac-418a-a2b4-ec4144efebe7] Running
E0810 23:08:35.494883   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/bridge-20210810225505-30291/client.crt: no such file or directory
start_stop_delete_test.go:169: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 12.024800196s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context embed-certs-20210810230700-30291 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (12.64s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-20210810230700-30291 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0810 23:08:40.489869   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210810225505-30291/client.crt: no such file or directory
E0810 23:08:40.615190   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/bridge-20210810225505-30291/client.crt: no such file or directory
start_stop_delete_test.go:178: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-20210810230700-30291 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.008898142s)
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context embed-certs-20210810230700-30291 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.11s)

TestStartStop/group/embed-certs/serial/Stop (63.43s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-20210810230700-30291 --alsologtostderr -v=3
E0810 23:08:50.855777   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/bridge-20210810225505-30291/client.crt: no such file or directory
E0810 23:09:02.066870   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/flannel-20210810225505-30291/client.crt: no such file or directory
E0810 23:09:11.337005   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/bridge-20210810225505-30291/client.crt: no such file or directory
E0810 23:09:19.512276   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210810225506-30291/client.crt: no such file or directory
E0810 23:09:24.513235   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/kindnet-20210810225505-30291/client.crt: no such file or directory
E0810 23:09:28.026654   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210810225506-30291/client.crt: no such file or directory
E0810 23:09:30.304616   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210810225505-30291/client.crt: no such file or directory
E0810 23:09:35.790053   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210810222707-30291/client.crt: no such file or directory
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-20210810230700-30291 --alsologtostderr -v=3: (1m3.428832143s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (63.43s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.16s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210810230700-30291 -n embed-certs-20210810230700-30291
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210810230700-30291 -n embed-certs-20210810230700-30291: exit status 7 (73.647515ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-20210810230700-30291 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.16s)

TestStartStop/group/embed-certs/serial/SecondStart (380.05s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20210810230700-30291 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.21.3
E0810 23:09:52.279832   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/kindnet-20210810225505-30291/client.crt: no such file or directory
E0810 23:09:52.297989   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/bridge-20210810225505-30291/client.crt: no such file or directory
E0810 23:10:23.987275   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/flannel-20210810225505-30291/client.crt: no such file or directory

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20210810230700-30291 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.21.3: (6m19.7442308s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210810230700-30291 -n embed-certs-20210810230700-30291
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (380.05s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:340: "kubernetes-dashboard-6fcdf4f6d-w55bx" [1557012b-f3c0-4b66-9a82-4a17494686ef] Running
start_stop_delete_test.go:247: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.018443974s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:340: "kubernetes-dashboard-6fcdf4f6d-w55bx" [1557012b-f3c0-4b66-9a82-4a17494686ef] Running
start_stop_delete_test.go:260: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012445788s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context no-preload-20210810230207-30291 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-20210810230207-30291 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/no-preload/serial/Pause (2.98s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-20210810230207-30291 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20210810230207-30291 -n no-preload-20210810230207-30291
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20210810230207-30291 -n no-preload-20210810230207-30291: exit status 2 (251.450773ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20210810230207-30291 -n no-preload-20210810230207-30291
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20210810230207-30291 -n no-preload-20210810230207-30291: exit status 2 (252.481714ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-20210810230207-30291 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20210810230207-30291 -n no-preload-20210810230207-30291
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20210810230207-30291 -n no-preload-20210810230207-30291
E0810 23:11:13.065335   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/client.crt: no such file or directory
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.98s)

TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:340: "kubernetes-dashboard-6fcdf4f6d-68nxh" [c2bed627-fa30-4d94-ba93-fc100b7021f6] Running
start_stop_delete_test.go:247: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.017742377s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:340: "kubernetes-dashboard-6fcdf4f6d-68nxh" [c2bed627-fa30-4d94-ba93-fc100b7021f6] Running
E0810 23:12:40.144756   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/flannel-20210810225505-30291/client.crt: no such file or directory
start_stop_delete_test.go:260: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012423941s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context default-k8s-different-port-20210810230304-30291 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-different-port-20210810230304-30291 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/default-k8s-different-port/serial/Pause (2.66s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-different-port-20210810230304-30291 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20210810230304-30291 -n default-k8s-different-port-20210810230304-30291
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20210810230304-30291 -n default-k8s-different-port-20210810230304-30291: exit status 2 (232.405964ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20210810230304-30291 -n default-k8s-different-port-20210810230304-30291
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20210810230304-30291 -n default-k8s-different-port-20210810230304-30291: exit status 2 (239.857133ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-different-port-20210810230304-30291 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20210810230304-30291 -n default-k8s-different-port-20210810230304-30291
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20210810230304-30291 -n default-k8s-different-port-20210810230304-30291
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Pause (2.66s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:340: "kubernetes-dashboard-5d8978d65d-fdtx4" [6543054c-fa30-11eb-bf69-525400ce8af1] Running
E0810 23:13:12.804150   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210810225505-30291/client.crt: no such file or directory
start_stop_delete_test.go:247: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.014581126s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:340: "kubernetes-dashboard-5d8978d65d-fdtx4" [6543054c-fa30-11eb-bf69-525400ce8af1] Running
start_stop_delete_test.go:260: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008144996s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context old-k8s-version-20210810230159-30291 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-20210810230159-30291 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/old-k8s-version/serial/Pause (2.7s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-20210810230159-30291 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20210810230159-30291 -n old-k8s-version-20210810230159-30291
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20210810230159-30291 -n old-k8s-version-20210810230159-30291: exit status 2 (233.965756ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20210810230159-30291 -n old-k8s-version-20210810230159-30291
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20210810230159-30291 -n old-k8s-version-20210810230159-30291: exit status 2 (239.552788ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-20210810230159-30291 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20210810230159-30291 -n old-k8s-version-20210810230159-30291
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20210810230159-30291 -n old-k8s-version-20210810230159-30291
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.70s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:340: "kubernetes-dashboard-6fcdf4f6d-2xjnj" [e3c85e90-7e38-4e3b-8657-bf987f9e7442] Running
start_stop_delete_test.go:247: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013866899s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:340: "kubernetes-dashboard-6fcdf4f6d-2xjnj" [e3c85e90-7e38-4e3b-8657-bf987f9e7442] Running
E0810 23:16:13.065693   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210810221736-30291/client.crt: no such file or directory
start_stop_delete_test.go:260: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007761055s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context embed-certs-20210810230700-30291 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-20210810230700-30291 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (2.64s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-20210810230700-30291 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20210810230700-30291 -n embed-certs-20210810230700-30291
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20210810230700-30291 -n embed-certs-20210810230700-30291: exit status 2 (244.50062ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20210810230700-30291 -n embed-certs-20210810230700-30291
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20210810230700-30291 -n embed-certs-20210810230700-30291: exit status 2 (243.063777ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-20210810230700-30291 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20210810230700-30291 -n embed-certs-20210810230700-30291
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20210810230700-30291 -n embed-certs-20210810230700-30291
E0810 23:16:17.489983   30291 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-26885-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/no-preload-20210810230207-30291/client.crt: no such file or directory
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.64s)

Test skip (28/263)

TestDownloadOnly/v1.14.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.14.0/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.14.0/cached-images (0.00s)

TestDownloadOnly/v1.14.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.14.0/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.14.0/binaries (0.00s)

TestDownloadOnly/v1.14.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.14.0/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.14.0/kubectl (0.00s)

TestDownloadOnly/v1.21.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.21.3/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.21.3/cached-images (0.00s)

TestDownloadOnly/v1.21.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.21.3/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.21.3/binaries (0.00s)

TestDownloadOnly/v1.21.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.21.3/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.21.3/kubectl (0.00s)

TestDownloadOnly/v1.22.0-rc.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.22.0-rc.0/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.22.0-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.22.0-rc.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.22.0-rc.0/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.22.0-rc.0/binaries (0.00s)

TestDownloadOnly/v1.22.0-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.22.0-rc.0/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.22.0-rc.0/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:212: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:35: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:115: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:188: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:467: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:527: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:96: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:96: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:96: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

x
+
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

x
+
TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

x
+
TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

x
+
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:39: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

x
+
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:43: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

x
+
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:43: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

x
+
TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

x
+
TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:286: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

x
+
TestNetworkPlugins/group/kubenet (0.26s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:88: Skipping the test as crio container runtimes requires CNI
helpers_test.go:176: Cleaning up "kubenet-20210810225505-30291" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-20210810225505-30291
--- SKIP: TestNetworkPlugins/group/kubenet (0.26s)

x
+
TestStartStop/group/disable-driver-mounts (0.35s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:91: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-20210810230304-30291" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-20210810230304-30291
--- SKIP: TestStartStop/group/disable-driver-mounts (0.35s)
