Test Report: KVM_Linux_crio 12230

1c76ff5cea01605c2d985c010644edf1e689d34b:2021-08-13:19970

Failed tests (5/263)

|-------|-----------------------------------------|--------------|
| Order |               Failed test               | Duration (s) |
|-------|-----------------------------------------|--------------|
| 30    | TestAddons/parallel/Ingress             |       255.94 |
| 152   | TestMultiNode/serial/DeployApp2Nodes    |        33.63 |
| 153   | TestMultiNode/serial/PingHostFrom2Pods  |        13.18 |
| 192   | TestPreload                             |       166.09 |
| 289   | TestNetworkPlugins/group/calico/Start   |        553.7 |
|-------|-----------------------------------------|--------------|
TestAddons/parallel/Ingress (255.94s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:158: (dbg) TestAddons/parallel/Ingress: waiting 12m0s for pods matching "app.kubernetes.io/name=ingress-nginx" in namespace "ingress-nginx" ...
helpers_test.go:343: "ingress-nginx-admission-create-jswnd" [35929738-f6a6-4263-91dd-5e83b165fc66] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:158: (dbg) TestAddons/parallel/Ingress: app.kubernetes.io/name=ingress-nginx healthy within 5.088813ms
addons_test.go:165: (dbg) Run:  kubectl --context addons-20210812235029-820289 replace --force -f testdata/nginx-ingv1.yaml
addons_test.go:180: (dbg) Run:  kubectl --context addons-20210812235029-820289 replace --force -f testdata/nginx-pod-svc.yaml

=== CONT  TestAddons/parallel/Ingress
addons_test.go:185: (dbg) TestAddons/parallel/Ingress: waiting 4m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:343: "nginx" [7419f352-61f4-4cd1-b1a1-5f2622b3f293] Pending
helpers_test.go:343: "nginx" [7419f352-61f4-4cd1-b1a1-5f2622b3f293] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestAddons/parallel/Ingress
helpers_test.go:343: "nginx" [7419f352-61f4-4cd1-b1a1-5f2622b3f293] Running

=== CONT  TestAddons/parallel/Ingress
addons_test.go:185: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 29.031060625s
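
The two waits above (addons_test.go:158 and addons_test.go:185) poll the cluster until every pod matching a label selector is healthy, or the stated timeout (12m0s and 4m0s respectively) expires. A minimal client-go sketch of that polling pattern, assuming a kubeconfig at the default location; the function below is illustrative, not the actual helpers_test.go code:

// waitforpods.go: poll until pods matching a selector are Running (or
// Succeeded, as with the ingress-nginx-admission-create pod above).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForPods(ctx context.Context, ns, selector string, timeout time.Duration) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		return err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pods, err := client.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil || len(pods.Items) == 0 {
			return false, nil // keep polling through transient errors and empty lists
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning && p.Status.Phase != corev1.PodSucceeded {
				return false, nil
			}
		}
		return true, nil
	})
}

func main() {
	err := waitForPods(context.Background(), "default", "run=nginx", 4*time.Minute)
	fmt.Println("wait result:", err)
}
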
addons_test.go:204: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210812235029-820289 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"

=== CONT  TestAddons/parallel/Ingress
addons_test.go:204: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-20210812235029-820289 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (31.921678218s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:204: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210812235029-820289 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"

=== CONT  TestAddons/parallel/Ingress
addons_test.go:204: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-20210812235029-820289 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (32.156209102s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:204: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210812235029-820289 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:204: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-20210812235029-820289 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (31.982155585s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:224: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
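
Exit status 28 is curl's "operation timed out" code, surfaced through the ssh wrapper as "Process exited with status 28": the nginx pod is Running, but a request routed through the ingress controller gets no response. A rough Go equivalent of the probe, aimed at the VM's IP from the host (192.168.39.112, per the provisioning log below) rather than at 127.0.0.1 inside the VM; the 30s timeout and the standalone form are assumptions, not the test's code:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 30 * time.Second}
	req, err := http.NewRequest(http.MethodGet, "http://192.168.39.112/", nil)
	if err != nil {
		panic(err)
	}
	// Setting req.Host mirrors curl -H 'Host: nginx.example.com': the
	// ingress controller matches this header against its routing rules.
	req.Host = "nginx.example.com"
	resp, err := client.Do(req)
	if err != nil {
		fmt.Println("request failed (curl reports a timeout as exit 28):", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%d body=%q\n", resp.StatusCode, body)
}
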
addons_test.go:165: (dbg) Run:  kubectl --context addons-20210812235029-820289 replace --force -f testdata/nginx-ingv1.yaml
addons_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210812235029-820289 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:242: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-20210812235029-820289 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (31.877474739s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210812235029-820289 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:242: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-20210812235029-820289 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (32.238183452s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210812235029-820289 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:242: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-20210812235029-820289 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (31.784556947s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:262: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:265: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210812235029-820289 addons disable ingress --alsologtostderr -v=1
addons_test.go:265: (dbg) Done: out/minikube-linux-amd64 -p addons-20210812235029-820289 addons disable ingress --alsologtostderr -v=1: (29.275049951s)
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-20210812235029-820289 -n addons-20210812235029-820289
helpers_test.go:245: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210812235029-820289 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p addons-20210812235029-820289 logs -n 25: (1.526573213s)
helpers_test.go:253: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                Args                 |               Profile               |  User   | Version |          Start Time           |           End Time            |
	|---------|-------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | --all                               | download-only-20210812235004-820289 | jenkins | v1.22.0 | Thu, 12 Aug 2021 23:50:28 UTC | Thu, 12 Aug 2021 23:50:28 UTC |
	| delete  | -p                                  | download-only-20210812235004-820289 | jenkins | v1.22.0 | Thu, 12 Aug 2021 23:50:28 UTC | Thu, 12 Aug 2021 23:50:29 UTC |
	|         | download-only-20210812235004-820289 |                                     |         |         |                               |                               |
	| delete  | -p                                  | download-only-20210812235004-820289 | jenkins | v1.22.0 | Thu, 12 Aug 2021 23:50:29 UTC | Thu, 12 Aug 2021 23:50:29 UTC |
	|         | download-only-20210812235004-820289 |                                     |         |         |                               |                               |
	| start   | -p                                  | addons-20210812235029-820289        | jenkins | v1.22.0 | Thu, 12 Aug 2021 23:50:29 UTC | Thu, 12 Aug 2021 23:53:22 UTC |
	|         | addons-20210812235029-820289        |                                     |         |         |                               |                               |
	|         | --wait=true --memory=4000           |                                     |         |         |                               |                               |
	|         | --alsologtostderr                   |                                     |         |         |                               |                               |
	|         | --addons=registry                   |                                     |         |         |                               |                               |
	|         | --addons=metrics-server             |                                     |         |         |                               |                               |
	|         | --addons=olm                        |                                     |         |         |                               |                               |
	|         | --addons=volumesnapshots            |                                     |         |         |                               |                               |
	|         | --addons=csi-hostpath-driver        |                                     |         |         |                               |                               |
	|         | --driver=kvm2                       |                                     |         |         |                               |                               |
	|         | --container-runtime=crio            |                                     |         |         |                               |                               |
	|         | --addons=ingress                    |                                     |         |         |                               |                               |
	|         | --addons=helm-tiller                |                                     |         |         |                               |                               |
	| -p      | addons-20210812235029-820289        | addons-20210812235029-820289        | jenkins | v1.22.0 | Thu, 12 Aug 2021 23:53:35 UTC | Thu, 12 Aug 2021 23:53:50 UTC |
	|         | addons enable gcp-auth --force      |                                     |         |         |                               |                               |
	| -p      | addons-20210812235029-820289        | addons-20210812235029-820289        | jenkins | v1.22.0 | Thu, 12 Aug 2021 23:53:55 UTC | Thu, 12 Aug 2021 23:53:56 UTC |
	|         | addons disable metrics-server       |                                     |         |         |                               |                               |
	|         | --alsologtostderr -v=1              |                                     |         |         |                               |                               |
	| -p      | addons-20210812235029-820289        | addons-20210812235029-820289        | jenkins | v1.22.0 | Thu, 12 Aug 2021 23:54:04 UTC | Thu, 12 Aug 2021 23:54:04 UTC |
	|         | ip                                  |                                     |         |         |                               |                               |
	| -p      | addons-20210812235029-820289        | addons-20210812235029-820289        | jenkins | v1.22.0 | Thu, 12 Aug 2021 23:54:04 UTC | Thu, 12 Aug 2021 23:54:05 UTC |
	|         | addons disable registry             |                                     |         |         |                               |                               |
	|         | --alsologtostderr -v=1              |                                     |         |         |                               |                               |
	| -p      | addons-20210812235029-820289        | addons-20210812235029-820289        | jenkins | v1.22.0 | Thu, 12 Aug 2021 23:54:10 UTC | Thu, 12 Aug 2021 23:54:11 UTC |
	|         | addons disable helm-tiller          |                                     |         |         |                               |                               |
	|         | --alsologtostderr -v=1              |                                     |         |         |                               |                               |
	| -p      | addons-20210812235029-820289        | addons-20210812235029-820289        | jenkins | v1.22.0 | Thu, 12 Aug 2021 23:54:54 UTC | Thu, 12 Aug 2021 23:55:01 UTC |
	|         | addons disable gcp-auth             |                                     |         |         |                               |                               |
	|         | --alsologtostderr -v=1              |                                     |         |         |                               |                               |
	| -p      | addons-20210812235029-820289        | addons-20210812235029-820289        | jenkins | v1.22.0 | Thu, 12 Aug 2021 23:55:12 UTC | Thu, 12 Aug 2021 23:55:19 UTC |
	|         | addons disable                      |                                     |         |         |                               |                               |
	|         | csi-hostpath-driver                 |                                     |         |         |                               |                               |
	|         | --alsologtostderr -v=1              |                                     |         |         |                               |                               |
	| -p      | addons-20210812235029-820289        | addons-20210812235029-820289        | jenkins | v1.22.0 | Thu, 12 Aug 2021 23:55:19 UTC | Thu, 12 Aug 2021 23:55:20 UTC |
	|         | addons disable volumesnapshots      |                                     |         |         |                               |                               |
	|         | --alsologtostderr -v=1              |                                     |         |         |                               |                               |
	| -p      | addons-20210812235029-820289        | addons-20210812235029-820289        | jenkins | v1.22.0 | Thu, 12 Aug 2021 23:57:49 UTC | Thu, 12 Aug 2021 23:58:19 UTC |
	|         | addons disable ingress              |                                     |         |         |                               |                               |
	|         | --alsologtostderr -v=1              |                                     |         |         |                               |                               |
	|---------|-------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/12 23:50:29
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0812 23:50:29.319392  820640 out.go:298] Setting OutFile to fd 1 ...
	I0812 23:50:29.319475  820640 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0812 23:50:29.319480  820640 out.go:311] Setting ErrFile to fd 2...
	I0812 23:50:29.319485  820640 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0812 23:50:29.319592  820640 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	I0812 23:50:29.319893  820640 out.go:305] Setting JSON to false
	I0812 23:50:29.354403  820640 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":12792,"bootTime":1628799437,"procs":156,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0812 23:50:29.354489  820640 start.go:121] virtualization: kvm guest
	I0812 23:50:29.357033  820640 out.go:177] * [addons-20210812235029-820289] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0812 23:50:29.358811  820640 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0812 23:50:29.357174  820640 notify.go:169] Checking for updates...
	I0812 23:50:29.360402  820640 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0812 23:50:29.361859  820640 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	I0812 23:50:29.363254  820640 out.go:177]   - MINIKUBE_LOCATION=12230
	I0812 23:50:29.363426  820640 driver.go:335] Setting default libvirt URI to qemu:///system
	I0812 23:50:29.391536  820640 out.go:177] * Using the kvm2 driver based on user configuration
	I0812 23:50:29.391559  820640 start.go:278] selected driver: kvm2
	I0812 23:50:29.391565  820640 start.go:751] validating driver "kvm2" against <nil>
	I0812 23:50:29.391591  820640 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0812 23:50:29.392557  820640 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 23:50:29.392682  820640 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0812 23:50:29.402874  820640 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0812 23:50:29.402922  820640 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0812 23:50:29.403062  820640 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 23:50:29.403084  820640 cni.go:93] Creating CNI manager for ""
	I0812 23:50:29.403090  820640 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0812 23:50:29.403096  820640 start_flags.go:272] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0812 23:50:29.403104  820640 start_flags.go:277] config:
	{Name:addons-20210812235029-820289 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210812235029-820289 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0812 23:50:29.403242  820640 iso.go:123] acquiring lock: {Name:mk52748db467e5aa4b344902ee09c9ea40467a67 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 23:50:29.404961  820640 out.go:177] * Starting control plane node addons-20210812235029-820289 in cluster addons-20210812235029-820289
	I0812 23:50:29.404990  820640 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0812 23:50:29.405017  820640 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0812 23:50:29.405042  820640 cache.go:56] Caching tarball of preloaded images
	I0812 23:50:29.405147  820640 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0812 23:50:29.405166  820640 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on crio
	I0812 23:50:29.405388  820640 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/config.json ...
	I0812 23:50:29.405410  820640 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/config.json: {Name:mka81f27678f8db57cbfff70e28ce28f38a8ce33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 23:50:29.405535  820640 cache.go:205] Successfully downloaded all kic artifacts
	I0812 23:50:29.405559  820640 start.go:313] acquiring machines lock for addons-20210812235029-820289: {Name:mk2d46e46728943fc604570595bb7616469b4e8e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 23:50:29.405608  820640 start.go:317] acquired machines lock for "addons-20210812235029-820289" in 36.649µs
	I0812 23:50:29.405626  820640 start.go:89] Provisioning new machine with config: &{Name:addons-20210812235029-820289 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12122/minikube-v1.22.0-1628238775-12122.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210812235029-820289 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0812 23:50:29.405697  820640 start.go:126] createHost starting for "" (driver="kvm2")
	I0812 23:50:29.407488  820640 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0812 23:50:29.407595  820640 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 23:50:29.407667  820640 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0812 23:50:29.417049  820640 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33759
	I0812 23:50:29.417468  820640 main.go:130] libmachine: () Calling .GetVersion
	I0812 23:50:29.418043  820640 main.go:130] libmachine: Using API Version  1
	I0812 23:50:29.418068  820640 main.go:130] libmachine: () Calling .SetConfigRaw
	I0812 23:50:29.418409  820640 main.go:130] libmachine: () Calling .GetMachineName
	I0812 23:50:29.418578  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetMachineName
	I0812 23:50:29.418690  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .DriverName
	I0812 23:50:29.418817  820640 start.go:160] libmachine.API.Create for "addons-20210812235029-820289" (driver="kvm2")
	I0812 23:50:29.418850  820640 client.go:168] LocalClient.Create starting
	I0812 23:50:29.418907  820640 main.go:130] libmachine: Creating CA: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem
	I0812 23:50:29.657223  820640 main.go:130] libmachine: Creating client certificate: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem
	I0812 23:50:29.862852  820640 main.go:130] libmachine: Running pre-create checks...
	I0812 23:50:29.862876  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .PreCreateCheck
	I0812 23:50:29.863321  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetConfigRaw
	I0812 23:50:29.863820  820640 main.go:130] libmachine: Creating machine...
	I0812 23:50:29.863843  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .Create
	I0812 23:50:29.863973  820640 main.go:130] libmachine: (addons-20210812235029-820289) Creating KVM machine...
	I0812 23:50:29.866643  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | found existing default KVM network
	I0812 23:50:29.867778  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | I0812 23:50:29.867572  820664 network.go:288] reserving subnet 192.168.39.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.39.0:0xc00009e5e0] misses:0}
	I0812 23:50:29.867817  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | I0812 23:50:29.867676  820664 network.go:235] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0812 23:50:29.902336  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | trying to create private KVM network mk-addons-20210812235029-820289 192.168.39.0/24...
	I0812 23:50:30.112123  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | private KVM network mk-addons-20210812235029-820289 192.168.39.0/24 created
	I0812 23:50:30.112151  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | I0812 23:50:30.112090  820664 common.go:108] Making disk image using store path: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	I0812 23:50:30.112166  820640 main.go:130] libmachine: (addons-20210812235029-820289) Setting up store path in /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812235029-820289 ...
	I0812 23:50:30.112184  820640 main.go:130] libmachine: (addons-20210812235029-820289) Building disk image from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/iso/minikube-v1.22.0-1628238775-12122.iso
	I0812 23:50:30.112369  820640 main.go:130] libmachine: (addons-20210812235029-820289) Downloading /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/iso/minikube-v1.22.0-1628238775-12122.iso...
	I0812 23:50:30.303704  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | I0812 23:50:30.303560  820664 common.go:115] Creating ssh key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812235029-820289/id_rsa...
	I0812 23:50:30.505024  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | I0812 23:50:30.504906  820664 common.go:121] Creating raw disk image: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812235029-820289/addons-20210812235029-820289.rawdisk...
	I0812 23:50:30.505055  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | Writing magic tar header
	I0812 23:50:30.505081  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | Writing SSH key tar header
	I0812 23:50:30.505092  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | I0812 23:50:30.505057  820664 common.go:135] Fixing permissions on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812235029-820289 ...
	I0812 23:50:30.505218  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812235029-820289
	I0812 23:50:30.505266  820640 main.go:130] libmachine: (addons-20210812235029-820289) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812235029-820289 (perms=drwx------)
	I0812 23:50:30.505283  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines
	I0812 23:50:30.505303  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	I0812 23:50:30.505316  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b
	I0812 23:50:30.505334  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0812 23:50:30.505354  820640 main.go:130] libmachine: (addons-20210812235029-820289) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines (perms=drwxr-xr-x)
	I0812 23:50:30.505368  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | Checking permissions on dir: /home/jenkins
	I0812 23:50:30.505379  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | Checking permissions on dir: /home
	I0812 23:50:30.505392  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | Skipping /home - not owner
	I0812 23:50:30.505411  820640 main.go:130] libmachine: (addons-20210812235029-820289) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube (perms=drwxr-xr-x)
	I0812 23:50:30.505444  820640 main.go:130] libmachine: (addons-20210812235029-820289) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b (perms=drwxr-xr-x)
	I0812 23:50:30.505461  820640 main.go:130] libmachine: (addons-20210812235029-820289) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxr-xr-x)
	I0812 23:50:30.505474  820640 main.go:130] libmachine: (addons-20210812235029-820289) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0812 23:50:30.505486  820640 main.go:130] libmachine: (addons-20210812235029-820289) Creating domain...
	I0812 23:50:30.529583  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined MAC address 52:54:00:e8:20:83 in network default
	I0812 23:50:30.530133  820640 main.go:130] libmachine: (addons-20210812235029-820289) Ensuring networks are active...
	I0812 23:50:30.530172  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:50:30.532043  820640 main.go:130] libmachine: (addons-20210812235029-820289) Ensuring network default is active
	I0812 23:50:30.532388  820640 main.go:130] libmachine: (addons-20210812235029-820289) Ensuring network mk-addons-20210812235029-820289 is active
	I0812 23:50:30.532837  820640 main.go:130] libmachine: (addons-20210812235029-820289) Getting domain xml...
	I0812 23:50:30.534692  820640 main.go:130] libmachine: (addons-20210812235029-820289) Creating domain...
	I0812 23:50:30.885938  820640 main.go:130] libmachine: (addons-20210812235029-820289) Waiting to get IP...
	I0812 23:50:30.886691  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:50:30.887138  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | unable to find current IP address of domain addons-20210812235029-820289 in network mk-addons-20210812235029-820289
	I0812 23:50:30.887176  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | I0812 23:50:30.887127  820664 retry.go:31] will retry after 263.082536ms: waiting for machine to come up
	I0812 23:50:31.151476  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:50:31.152094  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | unable to find current IP address of domain addons-20210812235029-820289 in network mk-addons-20210812235029-820289
	I0812 23:50:31.152127  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | I0812 23:50:31.152037  820664 retry.go:31] will retry after 381.329545ms: waiting for machine to come up
	I0812 23:50:31.534469  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:50:31.534901  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | unable to find current IP address of domain addons-20210812235029-820289 in network mk-addons-20210812235029-820289
	I0812 23:50:31.534932  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | I0812 23:50:31.534861  820664 retry.go:31] will retry after 422.765636ms: waiting for machine to come up
	I0812 23:50:31.959333  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:50:31.959776  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | unable to find current IP address of domain addons-20210812235029-820289 in network mk-addons-20210812235029-820289
	I0812 23:50:31.959800  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | I0812 23:50:31.959728  820664 retry.go:31] will retry after 473.074753ms: waiting for machine to come up
	I0812 23:50:32.434295  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:50:32.434745  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | unable to find current IP address of domain addons-20210812235029-820289 in network mk-addons-20210812235029-820289
	I0812 23:50:32.434788  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | I0812 23:50:32.434668  820664 retry.go:31] will retry after 587.352751ms: waiting for machine to come up
	I0812 23:50:33.023431  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:50:33.023898  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | unable to find current IP address of domain addons-20210812235029-820289 in network mk-addons-20210812235029-820289
	I0812 23:50:33.023933  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | I0812 23:50:33.023839  820664 retry.go:31] will retry after 834.206799ms: waiting for machine to come up
	I0812 23:50:33.859684  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:50:33.860174  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | unable to find current IP address of domain addons-20210812235029-820289 in network mk-addons-20210812235029-820289
	I0812 23:50:33.860205  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | I0812 23:50:33.860103  820664 retry.go:31] will retry after 746.553905ms: waiting for machine to come up
	I0812 23:50:34.608402  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:50:34.608902  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | unable to find current IP address of domain addons-20210812235029-820289 in network mk-addons-20210812235029-820289
	I0812 23:50:34.608928  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | I0812 23:50:34.608843  820664 retry.go:31] will retry after 987.362415ms: waiting for machine to come up
	I0812 23:50:35.597254  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:50:35.597696  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | unable to find current IP address of domain addons-20210812235029-820289 in network mk-addons-20210812235029-820289
	I0812 23:50:35.597786  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | I0812 23:50:35.597639  820664 retry.go:31] will retry after 1.189835008s: waiting for machine to come up
	I0812 23:50:36.788782  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:50:36.789213  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | unable to find current IP address of domain addons-20210812235029-820289 in network mk-addons-20210812235029-820289
	I0812 23:50:36.789245  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | I0812 23:50:36.789157  820664 retry.go:31] will retry after 1.677229867s: waiting for machine to come up
	I0812 23:50:38.469046  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:50:38.469566  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | unable to find current IP address of domain addons-20210812235029-820289 in network mk-addons-20210812235029-820289
	I0812 23:50:38.469601  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | I0812 23:50:38.469494  820664 retry.go:31] will retry after 2.346016261s: waiting for machine to come up
	I0812 23:50:40.817651  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:50:40.818077  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | unable to find current IP address of domain addons-20210812235029-820289 in network mk-addons-20210812235029-820289
	I0812 23:50:40.818112  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | I0812 23:50:40.818023  820664 retry.go:31] will retry after 3.36678925s: waiting for machine to come up
	I0812 23:50:44.188501  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:50:44.188989  820640 main.go:130] libmachine: (addons-20210812235029-820289) Found IP for machine: 192.168.39.112
	I0812 23:50:44.189010  820640 main.go:130] libmachine: (addons-20210812235029-820289) Reserving static IP address...
	I0812 23:50:44.189026  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has current primary IP address 192.168.39.112 and MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:50:44.189434  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | unable to find host DHCP lease matching {name: "addons-20210812235029-820289", mac: "52:54:00:33:6c:ed", ip: "192.168.39.112"} in network mk-addons-20210812235029-820289
	I0812 23:50:44.237434  820640 main.go:130] libmachine: (addons-20210812235029-820289) Reserved static IP address: 192.168.39.112
	I0812 23:50:44.237489  820640 main.go:130] libmachine: (addons-20210812235029-820289) Waiting for SSH to be available...
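
The "will retry after ..." lines above come from a backoff loop: each probe for the machine's DHCP lease fails fast, and the wait between attempts grows (from 263ms up to about 3.4s here) until the IP appears or the overall timeout is hit. A self-contained sketch of such a loop; the initial delay, growth factor, and jitter are assumptions, not minikube's actual retry.go parameters:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil calls fn until it succeeds or the deadline passes, sleeping
// a growing, jittered interval between attempts.
func retryUntil(timeout time.Duration, fn func() error) error {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay = delay * 3 / 2 // grow the base interval each round
	}
}

func main() {
	attempts := 0
	_ = retryUntil(2*time.Minute, func() error {
		attempts++
		if attempts < 5 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
}
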
	I0812 23:50:44.237502  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | Getting to WaitForSSH function...
	I0812 23:50:44.243212  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:50:44.243555  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6c:ed", ip: ""} in network mk-addons-20210812235029-820289: {Iface:virbr1 ExpiryTime:2021-08-13 00:50:44 +0000 UTC Type:0 Mac:52:54:00:33:6c:ed Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:minikube Clientid:01:52:54:00:33:6c:ed}
	I0812 23:50:44.243589  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined IP address 192.168.39.112 and MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:50:44.243800  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | Using SSH client type: external
	I0812 23:50:44.243839  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812235029-820289/id_rsa (-rw-------)
	I0812 23:50:44.243895  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.112 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812235029-820289/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0812 23:50:44.243956  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | About to run SSH command:
	I0812 23:50:44.243970  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | exit 0
	I0812 23:50:44.399207  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | SSH cmd err, output: <nil>: 
	I0812 23:50:44.399594  820640 main.go:130] libmachine: (addons-20210812235029-820289) KVM machine creation complete!
	I0812 23:50:44.399672  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetConfigRaw
	I0812 23:50:44.400413  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .DriverName
	I0812 23:50:44.400614  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .DriverName
	I0812 23:50:44.400798  820640 main.go:130] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0812 23:50:44.400824  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetState
	I0812 23:50:44.403527  820640 main.go:130] libmachine: Detecting operating system of created instance...
	I0812 23:50:44.403545  820640 main.go:130] libmachine: Waiting for SSH to be available...
	I0812 23:50:44.403554  820640 main.go:130] libmachine: Getting to WaitForSSH function...
	I0812 23:50:44.403569  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHHostname
	I0812 23:50:44.408273  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:50:44.408635  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6c:ed", ip: ""} in network mk-addons-20210812235029-820289: {Iface:virbr1 ExpiryTime:2021-08-13 00:50:44 +0000 UTC Type:0 Mac:52:54:00:33:6c:ed Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-20210812235029-820289 Clientid:01:52:54:00:33:6c:ed}
	I0812 23:50:44.408669  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined IP address 192.168.39.112 and MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:50:44.408792  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHPort
	I0812 23:50:44.408963  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHKeyPath
	I0812 23:50:44.409088  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHKeyPath
	I0812 23:50:44.409184  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHUsername
	I0812 23:50:44.409306  820640 main.go:130] libmachine: Using SSH client type: native
	I0812 23:50:44.409490  820640 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.112 22 <nil> <nil>}
	I0812 23:50:44.409504  820640 main.go:130] libmachine: About to run SSH command:
	exit 0
	I0812 23:50:44.534658  820640 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0812 23:50:44.534691  820640 main.go:130] libmachine: Detecting the provisioner...
	I0812 23:50:44.534701  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHHostname
	I0812 23:50:44.539492  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:50:44.539845  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6c:ed", ip: ""} in network mk-addons-20210812235029-820289: {Iface:virbr1 ExpiryTime:2021-08-13 00:50:44 +0000 UTC Type:0 Mac:52:54:00:33:6c:ed Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-20210812235029-820289 Clientid:01:52:54:00:33:6c:ed}
	I0812 23:50:44.539882  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined IP address 192.168.39.112 and MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:50:44.540045  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHPort
	I0812 23:50:44.540191  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHKeyPath
	I0812 23:50:44.540369  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHKeyPath
	I0812 23:50:44.540491  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHUsername
	I0812 23:50:44.540613  820640 main.go:130] libmachine: Using SSH client type: native
	I0812 23:50:44.540750  820640 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.112 22 <nil> <nil>}
	I0812 23:50:44.540761  820640 main.go:130] libmachine: About to run SSH command:
	cat /etc/os-release
	I0812 23:50:44.668107  820640 main.go:130] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2020.02.12
	ID=buildroot
	VERSION_ID=2020.02.12
	PRETTY_NAME="Buildroot 2020.02.12"
	
	I0812 23:50:44.668168  820640 main.go:130] libmachine: found compatible host: buildroot
	I0812 23:50:44.668177  820640 main.go:130] libmachine: Provisioning with buildroot...
	I0812 23:50:44.668191  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetMachineName
	I0812 23:50:44.668420  820640 buildroot.go:166] provisioning hostname "addons-20210812235029-820289"
	I0812 23:50:44.668445  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetMachineName
	I0812 23:50:44.668633  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHHostname
	I0812 23:50:44.673160  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:50:44.673448  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6c:ed", ip: ""} in network mk-addons-20210812235029-820289: {Iface:virbr1 ExpiryTime:2021-08-13 00:50:44 +0000 UTC Type:0 Mac:52:54:00:33:6c:ed Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-20210812235029-820289 Clientid:01:52:54:00:33:6c:ed}
	I0812 23:50:44.673477  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined IP address 192.168.39.112 and MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:50:44.673642  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHPort
	I0812 23:50:44.673829  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHKeyPath
	I0812 23:50:44.674004  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHKeyPath
	I0812 23:50:44.674137  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHUsername
	I0812 23:50:44.674325  820640 main.go:130] libmachine: Using SSH client type: native
	I0812 23:50:44.674504  820640 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.112 22 <nil> <nil>}
	I0812 23:50:44.674522  820640 main.go:130] libmachine: About to run SSH command:
	sudo hostname addons-20210812235029-820289 && echo "addons-20210812235029-820289" | sudo tee /etc/hostname
	I0812 23:50:44.806806  820640 main.go:130] libmachine: SSH cmd err, output: <nil>: addons-20210812235029-820289
	
	I0812 23:50:44.806829  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHHostname
	I0812 23:50:44.811409  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:50:44.811744  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6c:ed", ip: ""} in network mk-addons-20210812235029-820289: {Iface:virbr1 ExpiryTime:2021-08-13 00:50:44 +0000 UTC Type:0 Mac:52:54:00:33:6c:ed Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-20210812235029-820289 Clientid:01:52:54:00:33:6c:ed}
	I0812 23:50:44.811789  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined IP address 192.168.39.112 and MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:50:44.811899  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHPort
	I0812 23:50:44.812088  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHKeyPath
	I0812 23:50:44.812248  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHKeyPath
	I0812 23:50:44.812363  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHUsername
	I0812 23:50:44.812507  820640 main.go:130] libmachine: Using SSH client type: native
	I0812 23:50:44.812637  820640 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.112 22 <nil> <nil>}
	I0812 23:50:44.812660  820640 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-20210812235029-820289' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-20210812235029-820289/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-20210812235029-820289' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0812 23:50:44.944987  820640 main.go:130] libmachine: SSH cmd err, output: <nil>: 
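
Each provisioning step above runs as a one-shot command over SSH with key auth, using the id_rsa created earlier. A minimal golang.org/x/crypto/ssh sketch of that pattern; the host, user, and command are taken from the log, while the shortened key path and the simplified error handling are assumptions:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// The machine key created during provisioning; path shortened here,
	// the full one lives under .minikube/machines/<profile>/id_rsa.
	key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/addons-20210812235029-820289/id_rsa"))
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User: "docker",
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// Mirrors -o StrictHostKeyChecking=no in the external ssh flags above.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", "192.168.39.112:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(`sudo hostname addons-20210812235029-820289 && echo "addons-20210812235029-820289" | sudo tee /etc/hostname`)
	fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
}
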
	I0812 23:50:44.945022  820640 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube}
	I0812 23:50:44.945049  820640 buildroot.go:174] setting up certificates
	I0812 23:50:44.945064  820640 provision.go:83] configureAuth start
	I0812 23:50:44.945076  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetMachineName
	I0812 23:50:44.945320  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetIP
	I0812 23:50:44.949911  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:50:44.950195  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6c:ed", ip: ""} in network mk-addons-20210812235029-820289: {Iface:virbr1 ExpiryTime:2021-08-13 00:50:44 +0000 UTC Type:0 Mac:52:54:00:33:6c:ed Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-20210812235029-820289 Clientid:01:52:54:00:33:6c:ed}
	I0812 23:50:44.950219  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined IP address 192.168.39.112 and MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:50:44.950333  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHHostname
	I0812 23:50:44.954596  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:50:44.954858  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6c:ed", ip: ""} in network mk-addons-20210812235029-820289: {Iface:virbr1 ExpiryTime:2021-08-13 00:50:44 +0000 UTC Type:0 Mac:52:54:00:33:6c:ed Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-20210812235029-820289 Clientid:01:52:54:00:33:6c:ed}
	I0812 23:50:44.954883  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined IP address 192.168.39.112 and MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:50:44.955017  820640 provision.go:137] copyHostCerts
	I0812 23:50:44.955085  820640 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem (1078 bytes)
	I0812 23:50:44.955179  820640 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem (1123 bytes)
	I0812 23:50:44.955227  820640 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem (1679 bytes)
	I0812 23:50:44.955275  820640 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem org=jenkins.addons-20210812235029-820289 san=[192.168.39.112 192.168.39.112 localhost 127.0.0.1 minikube addons-20210812235029-820289]
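Note: the SAN list above (the node IP, localhost, 127.0.0.1, minikube, and the machine name) is what lets one server certificate validate under any of those addresses. A roughly equivalent manual flow with openssl (illustrative only; minikube generates these certs in Go, and the key size and validity below are assumptions):

	# illustrative only -- not minikube's actual code path
	openssl genrsa -out server-key.pem 2048
	openssl req -new -key server-key.pem -subj "/O=jenkins.addons-20210812235029-820289" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
	  -extfile <(printf 'subjectAltName=IP:192.168.39.112,DNS:localhost,DNS:minikube,DNS:addons-20210812235029-820289') \
	  -out server.pem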
	I0812 23:50:45.094869  820640 provision.go:171] copyRemoteCerts
	I0812 23:50:45.094920  820640 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0812 23:50:45.094945  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHHostname
	I0812 23:50:45.099502  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:50:45.099793  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6c:ed", ip: ""} in network mk-addons-20210812235029-820289: {Iface:virbr1 ExpiryTime:2021-08-13 00:50:44 +0000 UTC Type:0 Mac:52:54:00:33:6c:ed Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-20210812235029-820289 Clientid:01:52:54:00:33:6c:ed}
	I0812 23:50:45.099819  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined IP address 192.168.39.112 and MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:50:45.099948  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHPort
	I0812 23:50:45.100119  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHKeyPath
	I0812 23:50:45.100264  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHUsername
	I0812 23:50:45.100398  820640 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812235029-820289/id_rsa Username:docker}
	I0812 23:50:45.190667  820640 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0812 23:50:45.206218  820640 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0812 23:50:45.221139  820640 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0812 23:50:45.236251  820640 provision.go:86] duration metric: configureAuth took 291.179655ms
	I0812 23:50:45.236269  820640 buildroot.go:189] setting minikube options for container-runtime
	I0812 23:50:45.236517  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHHostname
	I0812 23:50:45.241903  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:50:45.242224  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6c:ed", ip: ""} in network mk-addons-20210812235029-820289: {Iface:virbr1 ExpiryTime:2021-08-13 00:50:44 +0000 UTC Type:0 Mac:52:54:00:33:6c:ed Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-20210812235029-820289 Clientid:01:52:54:00:33:6c:ed}
	I0812 23:50:45.242259  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined IP address 192.168.39.112 and MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:50:45.242455  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHPort
	I0812 23:50:45.242624  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHKeyPath
	I0812 23:50:45.242799  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHKeyPath
	I0812 23:50:45.242941  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHUsername
	I0812 23:50:45.243113  820640 main.go:130] libmachine: Using SSH client type: native
	I0812 23:50:45.243262  820640 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.112 22 <nil> <nil>}
	I0812 23:50:45.243279  820640 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0812 23:50:45.779830  820640 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
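Note: writing CRIO_MINIKUBE_OPTIONS and restarting crio only has an effect because the ISO's crio.service reads that file. The expected wiring is roughly as below (an assumption for illustration; the actual unit shipped on the minikube ISO may differ):

	# assumed systemd wiring, for illustration only
	[Service]
	EnvironmentFile=-/etc/sysconfig/crio.minikube
	ExecStart=/usr/bin/crio $CRIO_MINIKUBE_OPTIONS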
	
	I0812 23:50:45.779864  820640 main.go:130] libmachine: Checking connection to Docker...
	I0812 23:50:45.779873  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetURL
	I0812 23:50:45.782695  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | Using libvirt version 3000000
	I0812 23:50:45.786947  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:50:45.787259  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6c:ed", ip: ""} in network mk-addons-20210812235029-820289: {Iface:virbr1 ExpiryTime:2021-08-13 00:50:44 +0000 UTC Type:0 Mac:52:54:00:33:6c:ed Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-20210812235029-820289 Clientid:01:52:54:00:33:6c:ed}
	I0812 23:50:45.787298  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined IP address 192.168.39.112 and MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:50:45.787403  820640 main.go:130] libmachine: Docker is up and running!
	I0812 23:50:45.787417  820640 main.go:130] libmachine: Reticulating splines...
	I0812 23:50:45.787424  820640 client.go:171] LocalClient.Create took 16.368563624s
	I0812 23:50:45.787441  820640 start.go:168] duration metric: libmachine.API.Create for "addons-20210812235029-820289" took 16.368624917s
	I0812 23:50:45.787451  820640 start.go:267] post-start starting for "addons-20210812235029-820289" (driver="kvm2")
	I0812 23:50:45.787461  820640 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0812 23:50:45.787484  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .DriverName
	I0812 23:50:45.787699  820640 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0812 23:50:45.787736  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHHostname
	I0812 23:50:45.791773  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:50:45.792067  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6c:ed", ip: ""} in network mk-addons-20210812235029-820289: {Iface:virbr1 ExpiryTime:2021-08-13 00:50:44 +0000 UTC Type:0 Mac:52:54:00:33:6c:ed Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-20210812235029-820289 Clientid:01:52:54:00:33:6c:ed}
	I0812 23:50:45.792090  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined IP address 192.168.39.112 and MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:50:45.792192  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHPort
	I0812 23:50:45.792374  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHKeyPath
	I0812 23:50:45.792556  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHUsername
	I0812 23:50:45.792688  820640 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812235029-820289/id_rsa Username:docker}
	I0812 23:50:45.883675  820640 ssh_runner.go:149] Run: cat /etc/os-release
	I0812 23:50:45.888333  820640 info.go:137] Remote host: Buildroot 2020.02.12
	I0812 23:50:45.888355  820640 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/addons for local assets ...
	I0812 23:50:45.888407  820640 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files for local assets ...
	I0812 23:50:45.888433  820640 start.go:270] post-start completed in 100.972209ms
	I0812 23:50:45.888466  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetConfigRaw
	I0812 23:50:45.888977  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetIP
	I0812 23:50:45.893860  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:50:45.894166  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6c:ed", ip: ""} in network mk-addons-20210812235029-820289: {Iface:virbr1 ExpiryTime:2021-08-13 00:50:44 +0000 UTC Type:0 Mac:52:54:00:33:6c:ed Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-20210812235029-820289 Clientid:01:52:54:00:33:6c:ed}
	I0812 23:50:45.894194  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined IP address 192.168.39.112 and MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:50:45.894410  820640 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/config.json ...
	I0812 23:50:45.894596  820640 start.go:129] duration metric: createHost completed in 16.48889006s
	I0812 23:50:45.894613  820640 start.go:80] releasing machines lock for "addons-20210812235029-820289", held for 16.488992911s
	I0812 23:50:45.894647  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .DriverName
	I0812 23:50:45.894811  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetIP
	I0812 23:50:45.898839  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:50:45.899114  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6c:ed", ip: ""} in network mk-addons-20210812235029-820289: {Iface:virbr1 ExpiryTime:2021-08-13 00:50:44 +0000 UTC Type:0 Mac:52:54:00:33:6c:ed Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-20210812235029-820289 Clientid:01:52:54:00:33:6c:ed}
	I0812 23:50:45.899136  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined IP address 192.168.39.112 and MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:50:45.899269  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .DriverName
	I0812 23:50:45.899422  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .DriverName
	I0812 23:50:45.899951  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .DriverName
	I0812 23:50:45.900186  820640 ssh_runner.go:149] Run: systemctl --version
	I0812 23:50:45.900214  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHHostname
	I0812 23:50:45.900228  820640 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0812 23:50:45.900275  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHHostname
	I0812 23:50:45.905946  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:50:45.906238  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6c:ed", ip: ""} in network mk-addons-20210812235029-820289: {Iface:virbr1 ExpiryTime:2021-08-13 00:50:44 +0000 UTC Type:0 Mac:52:54:00:33:6c:ed Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-20210812235029-820289 Clientid:01:52:54:00:33:6c:ed}
	I0812 23:50:45.906259  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:50:45.906335  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined IP address 192.168.39.112 and MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:50:45.906490  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHPort
	I0812 23:50:45.906626  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6c:ed", ip: ""} in network mk-addons-20210812235029-820289: {Iface:virbr1 ExpiryTime:2021-08-13 00:50:44 +0000 UTC Type:0 Mac:52:54:00:33:6c:ed Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-20210812235029-820289 Clientid:01:52:54:00:33:6c:ed}
	I0812 23:50:45.906642  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHKeyPath
	I0812 23:50:45.906661  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined IP address 192.168.39.112 and MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:50:45.906801  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHUsername
	I0812 23:50:45.906810  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHPort
	I0812 23:50:45.906964  820640 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812235029-820289/id_rsa Username:docker}
	I0812 23:50:45.907022  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHKeyPath
	I0812 23:50:45.907148  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHUsername
	I0812 23:50:45.907294  820640 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812235029-820289/id_rsa Username:docker}
	I0812 23:50:46.005225  820640 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0812 23:50:46.005338  820640 ssh_runner.go:149] Run: sudo crictl images --output json
	I0812 23:50:50.008786  820640 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.003414659s)
	I0812 23:50:50.008923  820640 crio.go:420] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.21.3". assuming images are not preloaded.
	I0812 23:50:50.009019  820640 ssh_runner.go:149] Run: which lz4
	I0812 23:50:50.013559  820640 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0812 23:50:50.018026  820640 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0812 23:50:50.018058  820640 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (576184326 bytes)
	I0812 23:50:52.207638  820640 crio.go:362] Took 2.194124 seconds to copy over tarball
	I0812 23:50:52.207728  820640 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0812 23:50:57.749705  820640 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (5.541943928s)
	I0812 23:50:57.749745  820640 crio.go:369] Took 5.542072 seconds to extract the tarball
	I0812 23:50:57.749759  820640 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0812 23:50:57.789479  820640 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0812 23:50:57.801652  820640 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0812 23:50:57.811282  820640 docker.go:153] disabling docker service ...
	I0812 23:50:57.811328  820640 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0812 23:50:57.820701  820640 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0812 23:50:57.829514  820640 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0812 23:50:57.962296  820640 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0812 23:50:58.096211  820640 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0812 23:50:58.106886  820640 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0812 23:50:58.122501  820640 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
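Note: the crictl.yaml written just above pins both crictl endpoints to the CRI-O socket, and the sed pins kubeadm's pause image; from this point crictl calls (like the image listings later in this log) need no --runtime-endpoint flag:

	# sketch: the config file supplies the endpoints
	sudo crictl info      # runtime readiness
	sudo crictl images    # the same listing used by the preload checks in this log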
	I0812 23:50:58.131507  820640 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0812 23:50:58.138936  820640 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0812 23:50:58.138984  820640 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0812 23:50:58.153476  820640 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
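Note: the status-255 sysctl failure above is the expected probe result when br_netfilter is not yet loaded; the key /proc/sys/net/bridge/bridge-nf-call-iptables only exists once the module is in. The fallback sequence, reconstructed from the log as a sketch:

	# reconstructed from the log; error handling elided
	if ! sudo sysctl net.bridge.bridge-nf-call-iptables; then
	  sudo modprobe br_netfilter   # creates the net.bridge.* sysctl tree
	fi
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"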
	I0812 23:50:58.160373  820640 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0812 23:50:58.292274  820640 ssh_runner.go:149] Run: sudo systemctl start crio
	I0812 23:50:58.554349  820640 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0812 23:50:58.554432  820640 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0812 23:50:58.561739  820640 start.go:417] Will wait 60s for crictl version
	I0812 23:50:58.561790  820640 ssh_runner.go:149] Run: sudo crictl version
	I0812 23:50:58.593782  820640 start.go:426] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.2
	RuntimeApiVersion:  v1alpha1
	I0812 23:50:58.593870  820640 ssh_runner.go:149] Run: crio --version
	I0812 23:50:58.705923  820640 ssh_runner.go:149] Run: crio --version
	I0812 23:51:00.440154  820640 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.2 ...
	I0812 23:51:00.440299  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetIP
	I0812 23:51:00.445996  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:51:00.446334  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6c:ed", ip: ""} in network mk-addons-20210812235029-820289: {Iface:virbr1 ExpiryTime:2021-08-13 00:50:44 +0000 UTC Type:0 Mac:52:54:00:33:6c:ed Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-20210812235029-820289 Clientid:01:52:54:00:33:6c:ed}
	I0812 23:51:00.446370  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined IP address 192.168.39.112 and MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:51:00.446533  820640 ssh_runner.go:149] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0812 23:51:00.451232  820640 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 23:51:00.461940  820640 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0812 23:51:00.462015  820640 ssh_runner.go:149] Run: sudo crictl images --output json
	I0812 23:51:00.523095  820640 crio.go:424] all images are preloaded for cri-o runtime.
	I0812 23:51:00.523117  820640 crio.go:333] Images already preloaded, skipping extraction
	I0812 23:51:00.523159  820640 ssh_runner.go:149] Run: sudo crictl images --output json
	I0812 23:51:00.558177  820640 crio.go:424] all images are preloaded for cri-o runtime.
	I0812 23:51:00.558206  820640 cache_images.go:74] Images are preloaded, skipping loading
	I0812 23:51:00.558283  820640 ssh_runner.go:149] Run: crio config
	I0812 23:51:00.658973  820640 cni.go:93] Creating CNI manager for ""
	I0812 23:51:00.659002  820640 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0812 23:51:00.659014  820640 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0812 23:51:00.659027  820640 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.112 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-20210812235029-820289 NodeName:addons-20210812235029-820289 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.112"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.39.112 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0812 23:51:00.659195  820640 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.112
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "addons-20210812235029-820289"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.112
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.112"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
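Note: this four-document config (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is written to /var/tmp/minikube/kubeadm.yaml.new below and later handed to kubeadm init. A preflight-only dry check against such a file would look like this (illustrative invocation, not part of the logged run):

	# illustrative: validate preflight conditions against the generated config
	sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH \
	  kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml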
	
	I0812 23:51:00.659351  820640 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --hostname-override=addons-20210812235029-820289 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.112 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:addons-20210812235029-820289 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
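Note: in the unit above, the empty ExecStart= line deliberately clears any distro-default command before the second ExecStart= redefines it, pinning kubelet to the CRI-O socket for both runtime and image service. Once the unit and drop-in land (see the scp lines below), the typical activation is a sketch like:

	# typical systemd activation after dropping unit files; minikube drives this itself
	sudo systemctl daemon-reload
	sudo systemctl enable --now kubelet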
	I0812 23:51:00.659412  820640 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0812 23:51:00.666907  820640 binaries.go:44] Found k8s binaries, skipping transfer
	I0812 23:51:00.666965  820640 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0812 23:51:00.673390  820640 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (509 bytes)
	I0812 23:51:00.684674  820640 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0812 23:51:00.695900  820640 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
	I0812 23:51:00.706963  820640 ssh_runner.go:149] Run: grep 192.168.39.112	control-plane.minikube.internal$ /etc/hosts
	I0812 23:51:00.710827  820640 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.112	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 23:51:00.720785  820640 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289 for IP: 192.168.39.112
	I0812 23:51:00.720829  820640 certs.go:183] generating minikubeCA CA: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key
	I0812 23:51:00.884795  820640 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt ...
	I0812 23:51:00.884821  820640 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt: {Name:mkf4bbdeb71c664b0474b5f548a7e8e67fa083ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 23:51:00.885016  820640 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key ...
	I0812 23:51:00.885029  820640 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key: {Name:mkaeae298c5a077abebe7683da505bb598ac7cf6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 23:51:00.885113  820640 certs.go:183] generating proxyClientCA CA: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key
	I0812 23:51:01.047054  820640 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.crt ...
	I0812 23:51:01.047089  820640 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.crt: {Name:mk8e489bd4c20b46d292cecc16e423e795dd5395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 23:51:01.047282  820640 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key ...
	I0812 23:51:01.047296  820640 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key: {Name:mk8103e29cc588f3eee3d8b973cb9df98a0a4230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 23:51:01.047416  820640 certs.go:294] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/client.key
	I0812 23:51:01.047427  820640 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/client.crt with IP's: []
	I0812 23:51:01.147145  820640 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/client.crt ...
	I0812 23:51:01.147183  820640 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/client.crt: {Name:mk947e5163645be4ec8bb48c3480dc95aa59ceb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 23:51:01.147384  820640 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/client.key ...
	I0812 23:51:01.147399  820640 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/client.key: {Name:mk71a07635c21483f3ccbbf8c2d4de21ab5e315a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 23:51:01.147496  820640 certs.go:294] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/apiserver.key.ae0506ca
	I0812 23:51:01.147507  820640 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/apiserver.crt.ae0506ca with IP's: [192.168.39.112 10.96.0.1 127.0.0.1 10.0.0.1]
	I0812 23:51:01.528653  820640 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/apiserver.crt.ae0506ca ...
	I0812 23:51:01.528690  820640 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/apiserver.crt.ae0506ca: {Name:mk953ee49dcdf968021924622b7731c8a428e9dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 23:51:01.528882  820640 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/apiserver.key.ae0506ca ...
	I0812 23:51:01.528894  820640 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/apiserver.key.ae0506ca: {Name:mka0b1e902def5ac8a42e38aed893477b6ff65f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 23:51:01.528971  820640 certs.go:305] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/apiserver.crt.ae0506ca -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/apiserver.crt
	I0812 23:51:01.529030  820640 certs.go:309] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/apiserver.key.ae0506ca -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/apiserver.key
	I0812 23:51:01.529075  820640 certs.go:294] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/proxy-client.key
	I0812 23:51:01.529083  820640 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/proxy-client.crt with IP's: []
	I0812 23:51:01.579038  820640 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/proxy-client.crt ...
	I0812 23:51:01.579071  820640 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/proxy-client.crt: {Name:mkba6e194c0942a748c7a6c921af6a84c59cd314 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 23:51:01.579266  820640 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/proxy-client.key ...
	I0812 23:51:01.579280  820640 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/proxy-client.key: {Name:mk63926133a5dc2b6122dc15decf394b9f7945d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 23:51:01.579446  820640 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem (1679 bytes)
	I0812 23:51:01.579482  820640 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem (1078 bytes)
	I0812 23:51:01.579509  820640 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem (1123 bytes)
	I0812 23:51:01.579535  820640 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem (1679 bytes)
	I0812 23:51:01.580570  820640 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0812 23:51:01.598880  820640 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0812 23:51:01.615056  820640 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0812 23:51:01.632976  820640 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0812 23:51:01.650036  820640 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0812 23:51:01.666080  820640 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0812 23:51:01.682096  820640 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0812 23:51:01.698092  820640 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0812 23:51:01.714303  820640 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0812 23:51:01.730504  820640 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0812 23:51:01.742216  820640 ssh_runner.go:149] Run: openssl version
	I0812 23:51:01.747819  820640 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0812 23:51:01.756487  820640 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0812 23:51:01.760907  820640 certs.go:416] hashing: -rw-r--r-- 1 root root 1111 Aug 12 23:51 /usr/share/ca-certificates/minikubeCA.pem
	I0812 23:51:01.760952  820640 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0812 23:51:01.766691  820640 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
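Note: the b5213941.0 link name is not arbitrary: OpenSSL resolves CAs in /etc/ssl/certs by subject hash, and the hash printed by the x509 command two lines up is exactly that link's prefix:

	# the subject hash drives the symlink name <hash>.0
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941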
	I0812 23:51:01.774391  820640 kubeadm.go:390] StartCluster: {Name:addons-20210812235029-820289 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12122/minikube-v1.22.0-1628238775-12122.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210812235029-820289 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.112 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0812 23:51:01.774476  820640 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0812 23:51:01.774523  820640 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0812 23:51:01.806254  820640 cri.go:76] found id: ""
	I0812 23:51:01.806320  820640 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0812 23:51:01.813335  820640 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0812 23:51:01.819646  820640 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 23:51:01.826223  820640 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 23:51:01.826262  820640 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	I0812 23:51:02.283531  820640 out.go:204]   - Generating certificates and keys ...
	I0812 23:51:05.119517  820640 out.go:204]   - Booting up control plane ...
	I0812 23:51:21.717085  820640 out.go:204]   - Configuring RBAC rules ...
	I0812 23:51:22.266574  820640 cni.go:93] Creating CNI manager for ""
	I0812 23:51:22.266602  820640 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0812 23:51:22.268322  820640 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0812 23:51:22.268398  820640 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0812 23:51:22.280038  820640 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
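Note: the 457-byte 1-k8s.conflist copied here wires the bridge CNI plugin to the 10.244.0.0/16 pod CIDR chosen above. A representative conflist of that shape (illustrative; the file minikube actually generates may differ in detail):

	# illustrative only -- minikube writes this file itself over SSH
	cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    }
	  ]
	}
	EOF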
	I0812 23:51:22.306606  820640 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0812 23:51:22.306663  820640 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 23:51:22.306682  820640 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=dc1c3ca26e9449ce488a773126b8450402c94a19 minikube.k8s.io/name=addons-20210812235029-820289 minikube.k8s.io/updated_at=2021_08_12T23_51_22_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 23:51:22.361075  820640 ops.go:34] apiserver oom_adj: -16
	I0812 23:51:22.591388  820640 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 23:51:23.218630  820640 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 23:51:23.718580  820640 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 23:51:24.218343  820640 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 23:51:24.718199  820640 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 23:51:25.218078  820640 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 23:51:25.718542  820640 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 23:51:26.218901  820640 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 23:51:26.718253  820640 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 23:51:27.218816  820640 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 23:51:27.719030  820640 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 23:51:28.749053  820640 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.029971137s)
	I0812 23:51:29.218632  820640 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 23:51:29.718977  820640 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 23:51:30.218096  820640 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 23:51:30.718150  820640 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 23:51:31.218338  820640 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 23:51:31.718222  820640 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 23:51:32.218954  820640 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 23:51:32.718327  820640 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 23:51:33.218045  820640 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 23:51:33.718058  820640 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 23:51:34.218629  820640 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 23:51:34.718115  820640 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 23:51:35.218535  820640 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 23:51:35.342012  820640 kubeadm.go:985] duration metric: took 13.035405726s to wait for elevateKubeSystemPrivileges.
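Note: the repeated "kubectl get sa default" calls above are a poll: the command exits non-zero until the control plane has created the default ServiceAccount, which is the readiness signal this elevateKubeSystemPrivileges step waits on (13s here). As a sketch:

	# equivalent of the polling seen above
	until sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done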
	I0812 23:51:35.342044  820640 kubeadm.go:392] StartCluster complete in 33.567661329s
	I0812 23:51:35.342070  820640 settings.go:142] acquiring lock: {Name:mk8798f78c6f0a1d20052a3e99a18e56ee754eb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 23:51:35.342229  820640 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0812 23:51:35.342676  820640 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig: {Name:mk56dc63045ab5614dcc5cc2eaf1f7d3442c655e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 23:51:35.874652  820640 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "addons-20210812235029-820289" rescaled to 1
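Note: minikube trims the kubeadm default of two coredns replicas down to one; expressed as a plain kubectl command, the rescale logged above is equivalent to (illustrative):

	# illustrative equivalent of the rescale logged above
	kubectl --context addons-20210812235029-820289 -n kube-system \
	  scale deployment coredns --replicas=1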
	I0812 23:51:35.874729  820640 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.39.112 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0812 23:51:35.877130  820640 out.go:177] * Verifying Kubernetes components...
	I0812 23:51:35.874770  820640 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0812 23:51:35.877206  820640 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0812 23:51:35.874794  820640 addons.go:342] enableAddons start: toEnable=map[], additional=[registry metrics-server olm volumesnapshots csi-hostpath-driver ingress helm-tiller]
	I0812 23:51:35.877300  820640 addons.go:59] Setting ingress=true in profile "addons-20210812235029-820289"
	I0812 23:51:35.877313  820640 addons.go:59] Setting storage-provisioner=true in profile "addons-20210812235029-820289"
	I0812 23:51:35.877314  820640 addons.go:59] Setting metrics-server=true in profile "addons-20210812235029-820289"
	I0812 23:51:35.877320  820640 addons.go:59] Setting registry=true in profile "addons-20210812235029-820289"
	I0812 23:51:35.877326  820640 addons.go:135] Setting addon storage-provisioner=true in "addons-20210812235029-820289"
	I0812 23:51:35.877332  820640 addons.go:135] Setting addon metrics-server=true in "addons-20210812235029-820289"
	W0812 23:51:35.877336  820640 addons.go:147] addon storage-provisioner should already be in state true
	I0812 23:51:35.877336  820640 addons.go:59] Setting helm-tiller=true in profile "addons-20210812235029-820289"
	I0812 23:51:35.877338  820640 addons.go:135] Setting addon registry=true in "addons-20210812235029-820289"
	I0812 23:51:35.877348  820640 addons.go:135] Setting addon helm-tiller=true in "addons-20210812235029-820289"
	I0812 23:51:35.877360  820640 addons.go:59] Setting csi-hostpath-driver=true in profile "addons-20210812235029-820289"
	I0812 23:51:35.877370  820640 host.go:66] Checking if "addons-20210812235029-820289" exists ...
	I0812 23:51:35.877325  820640 addons.go:135] Setting addon ingress=true in "addons-20210812235029-820289"
	I0812 23:51:35.877390  820640 host.go:66] Checking if "addons-20210812235029-820289" exists ...
	I0812 23:51:35.877411  820640 addons.go:135] Setting addon csi-hostpath-driver=true in "addons-20210812235029-820289"
	I0812 23:51:35.877343  820640 addons.go:59] Setting default-storageclass=true in profile "addons-20210812235029-820289"
	I0812 23:51:35.877462  820640 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-20210812235029-820289"
	I0812 23:51:35.877363  820640 host.go:66] Checking if "addons-20210812235029-820289" exists ...
	I0812 23:51:35.877370  820640 host.go:66] Checking if "addons-20210812235029-820289" exists ...
	I0812 23:51:35.877958  820640 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 23:51:35.877976  820640 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 23:51:35.877301  820640 addons.go:59] Setting volumesnapshots=true in profile "addons-20210812235029-820289"
	I0812 23:51:35.878004  820640 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0812 23:51:35.877959  820640 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 23:51:35.877363  820640 host.go:66] Checking if "addons-20210812235029-820289" exists ...
	I0812 23:51:35.878040  820640 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0812 23:51:35.878004  820640 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 23:51:35.878121  820640 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0812 23:51:35.878125  820640 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0812 23:51:35.877446  820640 host.go:66] Checking if "addons-20210812235029-820289" exists ...
	I0812 23:51:35.878010  820640 addons.go:135] Setting addon volumesnapshots=true in "addons-20210812235029-820289"
	I0812 23:51:35.877302  820640 addons.go:59] Setting olm=true in profile "addons-20210812235029-820289"
	I0812 23:51:35.877963  820640 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 23:51:35.878357  820640 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0812 23:51:35.878364  820640 addons.go:135] Setting addon olm=true in "addons-20210812235029-820289"
	I0812 23:51:35.878382  820640 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 23:51:35.878405  820640 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0812 23:51:35.878292  820640 host.go:66] Checking if "addons-20210812235029-820289" exists ...
	I0812 23:51:35.878434  820640 host.go:66] Checking if "addons-20210812235029-820289" exists ...
	I0812 23:51:35.878620  820640 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 23:51:35.878659  820640 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0812 23:51:35.878896  820640 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 23:51:35.878943  820640 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0812 23:51:35.878956  820640 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 23:51:35.878998  820640 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0812 23:51:35.891845  820640 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35731
	I0812 23:51:35.891962  820640 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:46847
	I0812 23:51:35.892204  820640 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42779
	I0812 23:51:35.892356  820640 main.go:130] libmachine: () Calling .GetVersion
	I0812 23:51:35.892485  820640 main.go:130] libmachine: () Calling .GetVersion
	I0812 23:51:35.892593  820640 main.go:130] libmachine: () Calling .GetVersion
	I0812 23:51:35.893071  820640 main.go:130] libmachine: Using API Version  1
	I0812 23:51:35.893078  820640 main.go:130] libmachine: Using API Version  1
	I0812 23:51:35.893089  820640 main.go:130] libmachine: () Calling .SetConfigRaw
	I0812 23:51:35.893101  820640 main.go:130] libmachine: () Calling .SetConfigRaw
	I0812 23:51:35.893207  820640 main.go:130] libmachine: Using API Version  1
	I0812 23:51:35.893223  820640 main.go:130] libmachine: () Calling .SetConfigRaw
	I0812 23:51:35.893529  820640 main.go:130] libmachine: () Calling .GetMachineName
	I0812 23:51:35.893588  820640 main.go:130] libmachine: () Calling .GetMachineName
	I0812 23:51:35.894108  820640 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 23:51:35.894147  820640 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0812 23:51:35.894316  820640 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41853
	I0812 23:51:35.894342  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetState
	I0812 23:51:35.894565  820640 main.go:130] libmachine: () Calling .GetMachineName
	I0812 23:51:35.894767  820640 main.go:130] libmachine: () Calling .GetVersion
	I0812 23:51:35.895116  820640 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 23:51:35.895154  820640 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0812 23:51:35.895785  820640 main.go:130] libmachine: Using API Version  1
	I0812 23:51:35.895809  820640 main.go:130] libmachine: () Calling .SetConfigRaw
	I0812 23:51:35.896358  820640 main.go:130] libmachine: () Calling .GetMachineName
	I0812 23:51:35.897079  820640 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 23:51:35.897119  820640 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0812 23:51:35.902202  820640 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43865
	I0812 23:51:35.902641  820640 main.go:130] libmachine: () Calling .GetVersion
	I0812 23:51:35.903127  820640 main.go:130] libmachine: Using API Version  1
	I0812 23:51:35.903143  820640 main.go:130] libmachine: () Calling .SetConfigRaw
	I0812 23:51:35.903472  820640 main.go:130] libmachine: () Calling .GetMachineName
	I0812 23:51:35.904074  820640 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 23:51:35.904113  820640 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0812 23:51:35.906917  820640 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40105
	I0812 23:51:35.907303  820640 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44419
	I0812 23:51:35.907330  820640 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45211
	I0812 23:51:35.907457  820640 main.go:130] libmachine: () Calling .GetVersion
	I0812 23:51:35.907667  820640 main.go:130] libmachine: () Calling .GetVersion
	I0812 23:51:35.907684  820640 main.go:130] libmachine: () Calling .GetVersion
	I0812 23:51:35.907919  820640 main.go:130] libmachine: Using API Version  1
	I0812 23:51:35.907943  820640 main.go:130] libmachine: () Calling .SetConfigRaw
	I0812 23:51:35.908095  820640 main.go:130] libmachine: Using API Version  1
	I0812 23:51:35.908115  820640 main.go:130] libmachine: () Calling .SetConfigRaw
	I0812 23:51:35.908143  820640 main.go:130] libmachine: Using API Version  1
	I0812 23:51:35.908166  820640 main.go:130] libmachine: () Calling .SetConfigRaw
	I0812 23:51:35.908271  820640 main.go:130] libmachine: () Calling .GetMachineName
	I0812 23:51:35.908453  820640 main.go:130] libmachine: () Calling .GetMachineName
	I0812 23:51:35.908522  820640 main.go:130] libmachine: () Calling .GetMachineName
	I0812 23:51:35.908580  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetState
	I0812 23:51:35.908887  820640 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 23:51:35.908935  820640 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0812 23:51:35.911744  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .DriverName
	I0812 23:51:35.914215  820640 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 23:51:35.914328  820640 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 23:51:35.914345  820640 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0812 23:51:35.914365  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHHostname
	I0812 23:51:35.914373  820640 addons.go:135] Setting addon default-storageclass=true in "addons-20210812235029-820289"
	W0812 23:51:35.914384  820640 addons.go:147] addon default-storageclass should already be in state true
	I0812 23:51:35.914417  820640 host.go:66] Checking if "addons-20210812235029-820289" exists ...
	I0812 23:51:35.917825  820640 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35131
	I0812 23:51:35.920468  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:51:35.920947  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6c:ed", ip: ""} in network mk-addons-20210812235029-820289: {Iface:virbr1 ExpiryTime:2021-08-13 00:50:44 +0000 UTC Type:0 Mac:52:54:00:33:6c:ed Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-20210812235029-820289 Clientid:01:52:54:00:33:6c:ed}
	I0812 23:51:35.920988  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined IP address 192.168.39.112 and MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:51:35.921164  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHPort
	I0812 23:51:35.921322  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHKeyPath
	I0812 23:51:35.921465  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHUsername
	I0812 23:51:35.921592  820640 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812235029-820289/id_rsa Username:docker}
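The sshutil line above builds a key-authenticated SSH client to the node VM (IP 192.168.39.112, port 22, user "docker"), which later ssh_runner calls reuse to copy addon manifests and run kubectl. A hypothetical equivalent using golang.org/x/crypto/ssh — an illustrative sketch, not minikube's actual sshutil; the helper name and key path are placeholders:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// newSSHClient dials a node with key-based auth, mirroring the parameters
// the sshutil log line reports (IP, port 22, id_rsa path, user "docker").
func newSSHClient(addr, keyPath, user string) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User: user,
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// Throwaway test VM; real deployments should verify host keys.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	return ssh.Dial("tcp", addr, cfg)
}

func main() {
	client, err := newSSHClient("192.168.39.112:22", "/path/to/machines/id_rsa", "docker")
	if err != nil {
		fmt.Println("ssh dial failed:", err)
		return
	}
	defer client.Close()

	// Run one command over the connection, as ssh_runner does above.
	sess, err := client.NewSession()
	if err != nil {
		fmt.Println("session failed:", err)
		return
	}
	defer sess.Close()
	out, _ := sess.CombinedOutput("sudo systemctl is-active kubelet")
	fmt.Printf("kubelet: %s", out)
}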
	I0812 23:51:35.923369  820640 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41357
	I0812 23:51:35.923578  820640 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37337
	I0812 23:51:35.932846  820640 main.go:130] libmachine: () Calling .GetVersion
	I0812 23:51:35.932892  820640 main.go:130] libmachine: () Calling .GetVersion
	I0812 23:51:35.933228  820640 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 23:51:35.933281  820640 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0812 23:51:35.933309  820640 main.go:130] libmachine: () Calling .GetVersion
	I0812 23:51:35.933487  820640 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 23:51:35.933534  820640 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0812 23:51:35.934783  820640 main.go:130] libmachine: Using API Version  1
	I0812 23:51:35.934803  820640 main.go:130] libmachine: () Calling .SetConfigRaw
	I0812 23:51:35.934964  820640 main.go:130] libmachine: Using API Version  1
	I0812 23:51:35.934982  820640 main.go:130] libmachine: () Calling .SetConfigRaw
	I0812 23:51:35.935106  820640 main.go:130] libmachine: Using API Version  1
	I0812 23:51:35.935118  820640 main.go:130] libmachine: () Calling .SetConfigRaw
	I0812 23:51:35.935614  820640 main.go:130] libmachine: () Calling .GetMachineName
	I0812 23:51:35.935625  820640 main.go:130] libmachine: () Calling .GetMachineName
	I0812 23:51:35.935618  820640 main.go:130] libmachine: () Calling .GetMachineName
	I0812 23:51:35.935864  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetState
	I0812 23:51:35.936366  820640 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 23:51:35.936410  820640 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0812 23:51:35.936505  820640 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 23:51:35.936575  820640 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0812 23:51:35.939043  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .DriverName
	I0812 23:51:35.942844  820640 out.go:177]   - Using image gcr.io/kubernetes-helm/tiller:v2.16.12
	I0812 23:51:35.942944  820640 addons.go:275] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0812 23:51:35.942955  820640 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2433 bytes)
	I0812 23:51:35.942978  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHHostname
	I0812 23:51:35.947433  820640 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35061
	I0812 23:51:35.947574  820640 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34513
	I0812 23:51:35.947913  820640 main.go:130] libmachine: () Calling .GetVersion
	I0812 23:51:35.948041  820640 main.go:130] libmachine: () Calling .GetVersion
	I0812 23:51:35.948445  820640 main.go:130] libmachine: Using API Version  1
	I0812 23:51:35.948466  820640 main.go:130] libmachine: () Calling .SetConfigRaw
	I0812 23:51:35.948553  820640 main.go:130] libmachine: Using API Version  1
	I0812 23:51:35.948571  820640 main.go:130] libmachine: () Calling .SetConfigRaw
	I0812 23:51:35.948864  820640 main.go:130] libmachine: () Calling .GetMachineName
	I0812 23:51:35.948919  820640 main.go:130] libmachine: () Calling .GetMachineName
	I0812 23:51:35.949031  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetState
	I0812 23:51:35.949082  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetState
	I0812 23:51:35.949563  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:51:35.949727  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6c:ed", ip: ""} in network mk-addons-20210812235029-820289: {Iface:virbr1 ExpiryTime:2021-08-13 00:50:44 +0000 UTC Type:0 Mac:52:54:00:33:6c:ed Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-20210812235029-820289 Clientid:01:52:54:00:33:6c:ed}
	I0812 23:51:35.949797  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined IP address 192.168.39.112 and MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:51:35.950045  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHPort
	I0812 23:51:35.950206  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHKeyPath
	I0812 23:51:35.950360  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHUsername
	I0812 23:51:35.950500  820640 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812235029-820289/id_rsa Username:docker}
	I0812 23:51:35.953894  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .DriverName
	I0812 23:51:35.954021  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .DriverName
	I0812 23:51:35.956187  820640 out.go:177]   - Using image k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0
	I0812 23:51:35.957955  820640 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
	I0812 23:51:35.955620  820640 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34151
	I0812 23:51:35.956261  820640 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0812 23:51:35.957599  820640 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33643
	I0812 23:51:35.958445  820640 main.go:130] libmachine: () Calling .GetVersion
	I0812 23:51:35.959210  820640 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35495
	I0812 23:51:35.959811  820640 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0812 23:51:35.961995  820640 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0812 23:51:35.962049  820640 addons.go:275] installing /etc/kubernetes/addons/ingress-configmap.yaml
	I0812 23:51:35.962059  820640 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-configmap.yaml (1865 bytes)
	I0812 23:51:35.959928  820640 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0812 23:51:35.962081  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHHostname
	I0812 23:51:35.962099  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHHostname
	I0812 23:51:35.960366  820640 main.go:130] libmachine: Using API Version  1
	I0812 23:51:35.962134  820640 main.go:130] libmachine: () Calling .SetConfigRaw
	I0812 23:51:35.960414  820640 main.go:130] libmachine: () Calling .GetVersion
	I0812 23:51:35.960530  820640 main.go:130] libmachine: () Calling .GetVersion
	I0812 23:51:35.963017  820640 main.go:130] libmachine: () Calling .GetMachineName
	I0812 23:51:35.963082  820640 main.go:130] libmachine: Using API Version  1
	I0812 23:51:35.963103  820640 main.go:130] libmachine: () Calling .SetConfigRaw
	I0812 23:51:35.963174  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetState
	I0812 23:51:35.963481  820640 main.go:130] libmachine: () Calling .GetMachineName
	I0812 23:51:35.963727  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetState
	I0812 23:51:35.963925  820640 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44615
	I0812 23:51:35.964147  820640 main.go:130] libmachine: Using API Version  1
	I0812 23:51:35.964159  820640 main.go:130] libmachine: () Calling .SetConfigRaw
	I0812 23:51:35.964508  820640 main.go:130] libmachine: () Calling .GetMachineName
	I0812 23:51:35.964530  820640 main.go:130] libmachine: () Calling .GetVersion
	I0812 23:51:35.965101  820640 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 23:51:35.965137  820640 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0812 23:51:35.965696  820640 main.go:130] libmachine: Using API Version  1
	I0812 23:51:35.965712  820640 main.go:130] libmachine: () Calling .SetConfigRaw
	I0812 23:51:35.966290  820640 main.go:130] libmachine: () Calling .GetMachineName
	I0812 23:51:35.966463  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetState
	I0812 23:51:35.968359  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .DriverName
	I0812 23:51:35.970474  820640 out.go:177]   - Using image registry:2.7.1
	I0812 23:51:35.969565  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .DriverName
	I0812 23:51:35.970991  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .DriverName
	I0812 23:51:35.973093  820640 out.go:177]   - Using image k8s.gcr.io/metrics-server/metrics-server:v0.4.2
	I0812 23:51:35.974712  820640 out.go:177]   - Using image gcr.io/google_containers/kube-registry-proxy:0.4
	I0812 23:51:35.976355  820640 out.go:177]   - Using image k8s.gcr.io/sig-storage/hostpathplugin:v1.6.0
	I0812 23:51:35.971471  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:51:35.972100  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHPort
	I0812 23:51:35.972400  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:51:35.972909  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHPort
	I0812 23:51:35.973149  820640 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0812 23:51:35.973814  820640 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36875
	I0812 23:51:35.974802  820640 addons.go:275] installing /etc/kubernetes/addons/registry-rc.yaml
	I0812 23:51:35.976484  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6c:ed", ip: ""} in network mk-addons-20210812235029-820289: {Iface:virbr1 ExpiryTime:2021-08-13 00:50:44 +0000 UTC Type:0 Mac:52:54:00:33:6c:ed Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-20210812235029-820289 Clientid:01:52:54:00:33:6c:ed}
	I0812 23:51:35.978052  820640 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0
	I0812 23:51:35.978105  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6c:ed", ip: ""} in network mk-addons-20210812235029-820289: {Iface:virbr1 ExpiryTime:2021-08-13 00:50:44 +0000 UTC Type:0 Mac:52:54:00:33:6c:ed Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-20210812235029-820289 Clientid:01:52:54:00:33:6c:ed}
	I0812 23:51:35.978122  820640 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (788 bytes)
	I0812 23:51:35.979807  820640 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-attacher:v3.1.0
	I0812 23:51:35.978142  820640 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38397
	I0812 23:51:35.978141  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined IP address 192.168.39.112 and MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:51:35.978147  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHHostname
	I0812 23:51:35.978157  820640 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0812 23:51:35.978168  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined IP address 192.168.39.112 and MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:51:35.978267  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHKeyPath
	I0812 23:51:35.978273  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHKeyPath
	I0812 23:51:35.978518  820640 main.go:130] libmachine: () Calling .GetVersion
	I0812 23:51:35.980310  820640 main.go:130] libmachine: () Calling .GetVersion
	I0812 23:51:35.981463  820640 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-agent:v0.2.0
	I0812 23:51:35.981486  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHHostname
	I0812 23:51:35.983097  820640 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-controller:v0.2.0
	I0812 23:51:35.982343  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHUsername
	I0812 23:51:35.985135  820640 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1
	I0812 23:51:35.982357  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHUsername
	I0812 23:51:35.982395  820640 main.go:130] libmachine: Using API Version  1
	I0812 23:51:35.982409  820640 main.go:130] libmachine: Using API Version  1
	I0812 23:51:35.983355  820640 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812235029-820289/id_rsa Username:docker}
	I0812 23:51:35.985468  820640 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812235029-820289/id_rsa Username:docker}
	I0812 23:51:35.986862  820640 main.go:130] libmachine: () Calling .SetConfigRaw
	I0812 23:51:35.986879  820640 main.go:130] libmachine: () Calling .SetConfigRaw
	I0812 23:51:35.986945  820640 out.go:177]   - Using image k8s.gcr.io/sig-storage/livenessprobe:v2.2.0
	I0812 23:51:35.989108  820640 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-resizer:v1.1.0
	I0812 23:51:35.987301  820640 main.go:130] libmachine: () Calling .GetMachineName
	I0812 23:51:35.987316  820640 main.go:130] libmachine: () Calling .GetMachineName
	I0812 23:51:35.989924  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:51:35.990100  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:51:35.990491  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHPort
	I0812 23:51:35.990763  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHPort
	I0812 23:51:35.991514  820640 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0
	I0812 23:51:35.991570  820640 addons.go:275] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0812 23:51:35.991585  820640 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0812 23:51:35.991588  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6c:ed", ip: ""} in network mk-addons-20210812235029-820289: {Iface:virbr1 ExpiryTime:2021-08-13 00:50:44 +0000 UTC Type:0 Mac:52:54:00:33:6c:ed Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-20210812235029-820289 Clientid:01:52:54:00:33:6c:ed}
	I0812 23:51:35.991596  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6c:ed", ip: ""} in network mk-addons-20210812235029-820289: {Iface:virbr1 ExpiryTime:2021-08-13 00:50:44 +0000 UTC Type:0 Mac:52:54:00:33:6c:ed Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-20210812235029-820289 Clientid:01:52:54:00:33:6c:ed}
	I0812 23:51:35.991603  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHHostname
	I0812 23:51:35.991622  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined IP address 192.168.39.112 and MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:51:35.991627  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined IP address 192.168.39.112 and MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:51:35.991683  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHKeyPath
	I0812 23:51:35.991759  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetState
	I0812 23:51:35.991807  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHKeyPath
	I0812 23:51:35.991832  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHUsername
	I0812 23:51:35.991809  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetState
	I0812 23:51:35.991967  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHUsername
	I0812 23:51:35.991993  820640 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812235029-820289/id_rsa Username:docker}
	I0812 23:51:35.992121  820640 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812235029-820289/id_rsa Username:docker}
	I0812 23:51:35.996116  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .DriverName
	I0812 23:51:35.996345  820640 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0812 23:51:35.996523  820640 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0812 23:51:35.996551  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHHostname
	I0812 23:51:35.996871  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .DriverName
	I0812 23:51:35.999096  820640 out.go:177]   - Using image quay.io/operator-framework/olm:v0.17.0
	I0812 23:51:35.998809  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:51:36.000784  820640 out.go:177]   - Using image quay.io/operator-framework/upstream-community-operators:07bbc13
	I0812 23:51:35.999223  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6c:ed", ip: ""} in network mk-addons-20210812235029-820289: {Iface:virbr1 ExpiryTime:2021-08-13 00:50:44 +0000 UTC Type:0 Mac:52:54:00:33:6c:ed Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-20210812235029-820289 Clientid:01:52:54:00:33:6c:ed}
	I0812 23:51:35.999326  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHPort
	I0812 23:51:36.000913  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined IP address 192.168.39.112 and MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:51:36.001088  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHKeyPath
	I0812 23:51:36.001284  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHUsername
	I0812 23:51:36.001433  820640 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812235029-820289/id_rsa Username:docker}
	I0812 23:51:36.002417  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:51:36.002815  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6c:ed", ip: ""} in network mk-addons-20210812235029-820289: {Iface:virbr1 ExpiryTime:2021-08-13 00:50:44 +0000 UTC Type:0 Mac:52:54:00:33:6c:ed Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-20210812235029-820289 Clientid:01:52:54:00:33:6c:ed}
	I0812 23:51:36.002846  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined IP address 192.168.39.112 and MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:51:36.002985  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHPort
	I0812 23:51:36.003178  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHKeyPath
	I0812 23:51:36.003369  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHUsername
	I0812 23:51:36.003519  820640 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812235029-820289/id_rsa Username:docker}
	I0812 23:51:36.022003  820640 addons.go:275] installing /etc/kubernetes/addons/crds.yaml
	I0812 23:51:36.022079  820640 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/crds.yaml (825331 bytes)
	I0812 23:51:36.022305  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHHostname
	I0812 23:51:36.028062  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:51:36.028467  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:6c:ed", ip: ""} in network mk-addons-20210812235029-820289: {Iface:virbr1 ExpiryTime:2021-08-13 00:50:44 +0000 UTC Type:0 Mac:52:54:00:33:6c:ed Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:addons-20210812235029-820289 Clientid:01:52:54:00:33:6c:ed}
	I0812 23:51:36.028500  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | domain addons-20210812235029-820289 has defined IP address 192.168.39.112 and MAC address 52:54:00:33:6c:ed in network mk-addons-20210812235029-820289
	I0812 23:51:36.028610  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHPort
	I0812 23:51:36.028797  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHKeyPath
	I0812 23:51:36.028932  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .GetSSHUsername
	I0812 23:51:36.029049  820640 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812235029-820289/id_rsa Username:docker}
	I0812 23:51:36.316758  820640 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 23:51:36.387490  820640 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0812 23:51:36.387521  820640 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1931 bytes)
	I0812 23:51:36.421206  820640 addons.go:275] installing /etc/kubernetes/addons/ingress-rbac.yaml
	I0812 23:51:36.421237  820640 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-rbac.yaml (6005 bytes)
	I0812 23:51:36.496619  820640 addons.go:275] installing /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml
	I0812 23:51:36.496645  820640 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml (2203 bytes)
	I0812 23:51:36.499586  820640 addons.go:275] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0812 23:51:36.499604  820640 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0812 23:51:36.534461  820640 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0812 23:51:36.534486  820640 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0812 23:51:36.539405  820640 addons.go:275] installing /etc/kubernetes/addons/ingress-dp.yaml
	I0812 23:51:36.539423  820640 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-dp.yaml (9394 bytes)
	I0812 23:51:36.550333  820640 addons.go:275] installing /etc/kubernetes/addons/registry-svc.yaml
	I0812 23:51:36.550348  820640 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0812 23:51:36.552528  820640 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0812 23:51:36.552543  820640 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0812 23:51:36.563400  820640 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0812 23:51:36.571579  820640 addons.go:275] installing /etc/kubernetes/addons/olm.yaml
	I0812 23:51:36.571605  820640 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/olm.yaml (9882 bytes)
	I0812 23:51:36.584480  820640 addons.go:275] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0812 23:51:36.584500  820640 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0812 23:51:36.596692  820640 addons.go:275] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0812 23:51:36.596716  820640 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3037 bytes)
	I0812 23:51:36.602996  820640 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml
	I0812 23:51:36.603422  820640 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0812 23:51:36.603447  820640 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0812 23:51:36.634432  820640 addons.go:275] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0812 23:51:36.634454  820640 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (950 bytes)
	I0812 23:51:36.634525  820640 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	I0812 23:51:36.637944  820640 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0812 23:51:36.637960  820640 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0812 23:51:36.662650  820640 addons.go:275] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0812 23:51:36.662667  820640 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (3666 bytes)
	I0812 23:51:36.675112  820640 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0812 23:51:36.691344  820640 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0812 23:51:36.710594  820640 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0812 23:51:36.710617  820640 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19584 bytes)
	I0812 23:51:36.719423  820640 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0812 23:51:36.743618  820640 addons.go:275] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0812 23:51:36.743644  820640 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2944 bytes)
	I0812 23:51:36.766547  820640 addons.go:275] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0812 23:51:36.766568  820640 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3428 bytes)
	I0812 23:51:36.839258  820640 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
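For readability, the block that the sed pipeline above splices into the CoreDNS Corefile (taken verbatim from the logged command) is:

        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }

This maps host.minikube.internal to the host-side gateway IP so pods can reach the host machine, while fallthrough passes every other name on to the next CoreDNS plugin.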
	I0812 23:51:36.841589  820640 node_ready.go:35] waiting up to 6m0s for node "addons-20210812235029-820289" to be "Ready" ...
	I0812 23:51:36.846254  820640 node_ready.go:49] node "addons-20210812235029-820289" has status "Ready":"True"
	I0812 23:51:36.846277  820640 node_ready.go:38] duration metric: took 4.6551ms waiting for node "addons-20210812235029-820289" to be "Ready" ...
	I0812 23:51:36.846288  820640 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 23:51:36.855856  820640 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-gqmqd" in "kube-system" namespace to be "Ready" ...
	I0812 23:51:36.941151  820640 addons.go:275] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0812 23:51:36.941188  820640 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3194 bytes)
	I0812 23:51:36.998838  820640 addons.go:275] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0812 23:51:36.998869  820640 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1071 bytes)
	I0812 23:51:37.082820  820640 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0812 23:51:37.224534  820640 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0812 23:51:37.224578  820640 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2421 bytes)
	I0812 23:51:37.303675  820640 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0812 23:51:37.303732  820640 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1034 bytes)
	I0812 23:51:37.563112  820640 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0812 23:51:37.563140  820640 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (6710 bytes)
	I0812 23:51:37.719843  820640 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-provisioner.yaml
	I0812 23:51:37.719875  820640 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-provisioner.yaml (2555 bytes)
	I0812 23:51:37.756434  820640 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0812 23:51:37.756463  820640 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2469 bytes)
	I0812 23:51:37.881498  820640 pod_ready.go:97] error getting pod "coredns-558bd4d5db-gqmqd" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-gqmqd" not found
	I0812 23:51:37.881529  820640 pod_ready.go:81] duration metric: took 1.025651181s waiting for pod "coredns-558bd4d5db-gqmqd" in "kube-system" namespace to be "Ready" ...
	E0812 23:51:37.881540  820640 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-gqmqd" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-gqmqd" not found
	I0812 23:51:37.881548  820640 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-nrmhx" in "kube-system" namespace to be "Ready" ...
	I0812 23:51:38.210967  820640 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml
	I0812 23:51:38.211004  820640 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml (2555 bytes)
	I0812 23:51:38.521403  820640 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0812 23:51:38.521428  820640 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0812 23:51:38.556382  820640 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.239580627s)
	I0812 23:51:38.556432  820640 main.go:130] libmachine: Making call to close driver server
	I0812 23:51:38.556454  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .Close
	I0812 23:51:38.556811  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | Closing plugin on server side
	I0812 23:51:38.556883  820640 main.go:130] libmachine: Successfully made call to close driver server
	I0812 23:51:38.556908  820640 main.go:130] libmachine: Making call to close connection to plugin binary
	I0812 23:51:38.556921  820640 main.go:130] libmachine: Making call to close driver server
	I0812 23:51:38.556934  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .Close
	I0812 23:51:38.557247  820640 main.go:130] libmachine: Successfully made call to close driver server
	I0812 23:51:38.557261  820640 main.go:130] libmachine: Making call to close connection to plugin binary
	I0812 23:51:38.557270  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | Closing plugin on server side
	I0812 23:51:38.624059  820640 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0812 23:51:39.683796  820640 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.120358076s)
	I0812 23:51:39.683861  820640 main.go:130] libmachine: Making call to close driver server
	I0812 23:51:39.683875  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .Close
	I0812 23:51:39.684155  820640 main.go:130] libmachine: Successfully made call to close driver server
	I0812 23:51:39.684217  820640 main.go:130] libmachine: Making call to close connection to plugin binary
	I0812 23:51:39.684232  820640 main.go:130] libmachine: Making call to close driver server
	I0812 23:51:39.684243  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .Close
	I0812 23:51:39.684186  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | Closing plugin on server side
	I0812 23:51:39.684509  820640 main.go:130] libmachine: Successfully made call to close driver server
	I0812 23:51:39.684543  820640 main.go:130] libmachine: Making call to close connection to plugin binary
	I0812 23:51:39.684569  820640 main.go:130] libmachine: Making call to close driver server
	I0812 23:51:39.684584  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .Close
	I0812 23:51:39.685596  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | Closing plugin on server side
	I0812 23:51:39.685620  820640 main.go:130] libmachine: Successfully made call to close driver server
	I0812 23:51:39.685629  820640 main.go:130] libmachine: Making call to close connection to plugin binary
	I0812 23:51:39.893363  820640 pod_ready.go:102] pod "coredns-558bd4d5db-nrmhx" in "kube-system" namespace has status "Ready":"False"
	I0812 23:51:41.210755  820640 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml: (4.60771012s)
	I0812 23:51:41.210820  820640 main.go:130] libmachine: Making call to close driver server
	I0812 23:51:41.210840  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .Close
	I0812 23:51:41.211131  820640 main.go:130] libmachine: Successfully made call to close driver server
	I0812 23:51:41.211258  820640 main.go:130] libmachine: Making call to close connection to plugin binary
	I0812 23:51:41.211274  820640 main.go:130] libmachine: Making call to close driver server
	I0812 23:51:41.211285  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .Close
	I0812 23:51:41.211215  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | Closing plugin on server side
	I0812 23:51:41.212576  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | Closing plugin on server side
	I0812 23:51:41.212891  820640 main.go:130] libmachine: Successfully made call to close driver server
	I0812 23:51:41.212915  820640 main.go:130] libmachine: Making call to close connection to plugin binary
	I0812 23:51:41.212929  820640 addons.go:313] Verifying addon ingress=true in "addons-20210812235029-820289"
	I0812 23:51:41.214878  820640 out.go:177] * Verifying ingress addon...
	I0812 23:51:41.217704  820640 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0812 23:51:41.266111  820640 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0812 23:51:41.266129  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:51:41.800039  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:51:41.946399  820640 pod_ready.go:102] pod "coredns-558bd4d5db-nrmhx" in "kube-system" namespace has status "Ready":"False"
	I0812 23:51:42.287989  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:51:42.841317  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:51:43.430488  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:51:43.789021  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:51:44.333934  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:51:44.530028  820640 pod_ready.go:92] pod "coredns-558bd4d5db-nrmhx" in "kube-system" namespace has status "Ready":"True"
	I0812 23:51:44.530059  820640 pod_ready.go:81] duration metric: took 6.648505212s waiting for pod "coredns-558bd4d5db-nrmhx" in "kube-system" namespace to be "Ready" ...
	I0812 23:51:44.530069  820640 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-20210812235029-820289" in "kube-system" namespace to be "Ready" ...
	I0812 23:51:44.745494  820640 pod_ready.go:92] pod "etcd-addons-20210812235029-820289" in "kube-system" namespace has status "Ready":"True"
	I0812 23:51:44.745513  820640 pod_ready.go:81] duration metric: took 215.437448ms waiting for pod "etcd-addons-20210812235029-820289" in "kube-system" namespace to be "Ready" ...
	I0812 23:51:44.745524  820640 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-20210812235029-820289" in "kube-system" namespace to be "Ready" ...
	I0812 23:51:44.854256  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:51:44.877763  820640 pod_ready.go:92] pod "kube-apiserver-addons-20210812235029-820289" in "kube-system" namespace has status "Ready":"True"
	I0812 23:51:44.877783  820640 pod_ready.go:81] duration metric: took 132.252625ms waiting for pod "kube-apiserver-addons-20210812235029-820289" in "kube-system" namespace to be "Ready" ...
	I0812 23:51:44.877793  820640 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-20210812235029-820289" in "kube-system" namespace to be "Ready" ...
	I0812 23:51:45.020443  820640 pod_ready.go:92] pod "kube-controller-manager-addons-20210812235029-820289" in "kube-system" namespace has status "Ready":"True"
	I0812 23:51:45.020474  820640 pod_ready.go:81] duration metric: took 142.673555ms waiting for pod "kube-controller-manager-addons-20210812235029-820289" in "kube-system" namespace to be "Ready" ...
	I0812 23:51:45.020490  820640 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d2mcw" in "kube-system" namespace to be "Ready" ...
	I0812 23:51:45.152944  820640 pod_ready.go:92] pod "kube-proxy-d2mcw" in "kube-system" namespace has status "Ready":"True"
	I0812 23:51:45.152967  820640 pod_ready.go:81] duration metric: took 132.468739ms waiting for pod "kube-proxy-d2mcw" in "kube-system" namespace to be "Ready" ...
	I0812 23:51:45.152977  820640 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-20210812235029-820289" in "kube-system" namespace to be "Ready" ...
	I0812 23:51:45.273423  820640 pod_ready.go:92] pod "kube-scheduler-addons-20210812235029-820289" in "kube-system" namespace has status "Ready":"True"
	I0812 23:51:45.273450  820640 pod_ready.go:81] duration metric: took 120.464875ms waiting for pod "kube-scheduler-addons-20210812235029-820289" in "kube-system" namespace to be "Ready" ...
	I0812 23:51:45.273461  820640 pod_ready.go:38] duration metric: took 8.427153476s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
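The pod_ready loop above polls each system-critical pod until its PodReady condition reports True, with a 6m0s cap per pod. A minimal client-go sketch of the same polling pattern — an illustrative approximation, not minikube's actual pod_ready code; the function name, poll interval, and kubeconfig path are assumptions:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodsReady polls until every pod matching selector in ns has
// condition PodReady=True, or the timeout elapses.
func waitPodsReady(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, nil // treat API errors as transient and keep polling
		}
		if len(pods.Items) == 0 {
			return false, nil // nothing scheduled yet
		}
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			if !ready {
				return false, nil
			}
		}
		return true, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitPodsReady(cs, "kube-system", "k8s-app=kube-dns", 6*time.Minute); err != nil {
		fmt.Println("pods never became Ready:", err)
	}
}

Returning (false, nil) on a list error matches the log's behavior of retrying through transient apiserver hiccups rather than failing the wait outright.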
	I0812 23:51:45.273484  820640 api_server.go:50] waiting for apiserver process to appear ...
	I0812 23:51:45.273532  820640 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 23:51:45.344509  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:51:45.837904  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:51:45.927379  820640 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (9.292821453s)
	W0812 23:51:45.927419  820640 addons.go:296] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	namespace/olm created
	namespace/operators created
	serviceaccount/olm-operator-serviceaccount created
	clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	deployment.apps/olm-operator created
	deployment.apps/catalog-operator created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
	I0812 23:51:45.927452  820640 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
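The "no matches for kind" failures above are the standard CRD establishment race: crds.yaml and olm.yaml go through a single kubectl apply, so the OperatorGroup, ClusterServiceVersion, and CatalogSource objects are submitted before the API server has finished registering the CRDs that define them. The retry.go line handles it by re-applying after a short delay, by which point API discovery has caught up (the re-run launched at 23:51:46 completes cleanly at 23:51:51 below). Shape-wise the retry is just this; a hand-rolled sketch, not minikube's actual retry package:

	import (
		"math/rand"
		"time"
	)

	// retry re-runs fn until it succeeds or the deadline passes, sleeping a
	// short jittered delay between attempts, like the ~276ms delay logged above.
	func retry(fn func() error, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if err := fn(); err == nil {
				return nil
			} else if time.Now().After(deadline) {
				return err
			}
			time.Sleep(time.Duration(200+rand.Intn(200)) * time.Millisecond)
		}
	}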
	I0812 23:51:45.927478  820640 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (9.252333095s)
	I0812 23:51:45.927527  820640 main.go:130] libmachine: Making call to close driver server
	I0812 23:51:45.927549  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .Close
	I0812 23:51:45.927569  820640 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.236196573s)
	I0812 23:51:45.927610  820640 main.go:130] libmachine: Making call to close driver server
	I0812 23:51:45.927622  820640 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.208170485s)
	I0812 23:51:45.927628  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .Close
	I0812 23:51:45.927646  820640 main.go:130] libmachine: Making call to close driver server
	I0812 23:51:45.927660  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .Close
	I0812 23:51:45.927727  820640 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (9.088421474s)
	I0812 23:51:45.927754  820640 start.go:736] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS
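The host-record injection above is a ConfigMap edit: read the coredns ConfigMap, splice a hosts block in front of the Corefile's forward directive with sed, and kubectl replace the result, so pods can resolve host.minikube.internal to the host's IP. The same edit done directly with client-go might look like the sketch below; the four-space Corefile indentation is an assumption matching the stock kubeadm layout, and injectHostRecord is a made-up name:

	import (
		"context"
		"fmt"
		"strings"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// injectHostRecord inserts a hosts block resolving host.minikube.internal
	// ahead of CoreDNS's forward directive and updates the ConfigMap in place.
	func injectHostRecord(ctx context.Context, client kubernetes.Interface, hostIP string) error {
		cm, err := client.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			return err
		}
		hosts := fmt.Sprintf("    hosts {\n       %s host.minikube.internal\n       fallthrough\n    }\n", hostIP)
		cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"],
			"    forward . /etc/resolv.conf", hosts+"    forward . /etc/resolv.conf", 1)
		_, err = client.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
		return err
	}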
	I0812 23:51:45.927867  820640 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.844977836s)
	W0812 23:51:45.927902  820640 addons.go:296] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	I0812 23:51:45.927922  820640 retry.go:31] will retry after 360.127272ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	I0812 23:51:45.929060  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | Closing plugin on server side
	I0812 23:51:45.929064  820640 main.go:130] libmachine: Successfully made call to close driver server
	I0812 23:51:45.929091  820640 main.go:130] libmachine: Making call to close connection to plugin binary
	I0812 23:51:45.929101  820640 main.go:130] libmachine: Making call to close driver server
	I0812 23:51:45.929101  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | Closing plugin on server side
	I0812 23:51:45.929109  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .Close
	I0812 23:51:45.929063  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | Closing plugin on server side
	I0812 23:51:45.929205  820640 main.go:130] libmachine: Successfully made call to close driver server
	I0812 23:51:45.929224  820640 main.go:130] libmachine: Successfully made call to close driver server
	I0812 23:51:45.929251  820640 main.go:130] libmachine: Making call to close connection to plugin binary
	I0812 23:51:45.929267  820640 main.go:130] libmachine: Making call to close driver server
	I0812 23:51:45.929278  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .Close
	I0812 23:51:45.929339  820640 main.go:130] libmachine: Successfully made call to close driver server
	I0812 23:51:45.929379  820640 main.go:130] libmachine: Making call to close connection to plugin binary
	I0812 23:51:45.929230  820640 main.go:130] libmachine: Making call to close connection to plugin binary
	I0812 23:51:45.929406  820640 addons.go:313] Verifying addon registry=true in "addons-20210812235029-820289"
	I0812 23:51:45.929414  820640 main.go:130] libmachine: Making call to close driver server
	I0812 23:51:45.929430  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .Close
	I0812 23:51:45.929388  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | Closing plugin on server side
	I0812 23:51:45.929598  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | Closing plugin on server side
	I0812 23:51:45.929635  820640 main.go:130] libmachine: Successfully made call to close driver server
	I0812 23:51:45.929644  820640 main.go:130] libmachine: Making call to close connection to plugin binary
	I0812 23:51:45.929672  820640 main.go:130] libmachine: Successfully made call to close driver server
	I0812 23:51:45.929672  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | Closing plugin on server side
	I0812 23:51:45.929684  820640 main.go:130] libmachine: Making call to close connection to plugin binary
	I0812 23:51:45.929693  820640 addons.go:313] Verifying addon metrics-server=true in "addons-20210812235029-820289"
	I0812 23:51:45.931697  820640 out.go:177] * Verifying registry addon...
	I0812 23:51:45.933543  820640 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0812 23:51:45.983650  820640 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0812 23:51:45.983669  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
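Every kapi.go:96 line from here on is one iteration of the same per-addon loop: list the pods matching the addon's label selector and report the current state until all of them are Running and Ready. Reusing isPodReady and the imports from the first sketch (namespace per the kapi.go:75 line above; the ingress-nginx selector is assumed to be polled the same way):

	// waitForPods polls the pods matching sel in ns until every one is Ready.
	func waitForPods(ctx context.Context, client kubernetes.Interface, ns, sel string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			pods, err := client.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: sel})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // nothing listed yet, keep polling
			}
			for i := range pods.Items {
				if !isPodReady(&pods.Items[i]) {
					return false, nil // surfaces above as: current state: Pending
				}
			}
			return true, nil
		})
	}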
	I0812 23:51:46.204597  820640 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	I0812 23:51:46.288498  820640 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0812 23:51:46.302981  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:51:46.499384  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:51:46.844334  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:51:47.081871  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:51:47.311267  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:51:47.643961  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:51:47.836690  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:51:48.007571  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:51:48.075112  820640 ssh_runner.go:189] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.801552927s)
	I0812 23:51:48.075152  820640 api_server.go:70] duration metric: took 12.200388422s to wait for apiserver process to appear ...
	I0812 23:51:48.075168  820640 api_server.go:86] waiting for apiserver healthz status ...
	I0812 23:51:48.075181  820640 api_server.go:239] Checking apiserver healthz at https://192.168.39.112:8443/healthz ...
	I0812 23:51:48.076024  820640 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.45190855s)
	I0812 23:51:48.076083  820640 main.go:130] libmachine: Making call to close driver server
	I0812 23:51:48.076102  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .Close
	I0812 23:51:48.076482  820640 main.go:130] libmachine: Successfully made call to close driver server
	I0812 23:51:48.076503  820640 main.go:130] libmachine: Making call to close connection to plugin binary
	I0812 23:51:48.076506  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | Closing plugin on server side
	I0812 23:51:48.076520  820640 main.go:130] libmachine: Making call to close driver server
	I0812 23:51:48.076532  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .Close
	I0812 23:51:48.076756  820640 main.go:130] libmachine: Successfully made call to close driver server
	I0812 23:51:48.076774  820640 main.go:130] libmachine: Making call to close connection to plugin binary
	I0812 23:51:48.076786  820640 addons.go:313] Verifying addon csi-hostpath-driver=true in "addons-20210812235029-820289"
	I0812 23:51:48.078668  820640 out.go:177] * Verifying csi-hostpath-driver addon...
	I0812 23:51:48.080503  820640 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0812 23:51:48.094995  820640 api_server.go:265] https://192.168.39.112:8443/healthz returned 200:
	ok
	I0812 23:51:48.097232  820640 api_server.go:139] control plane version: v1.21.3
	I0812 23:51:48.097250  820640 api_server.go:129] duration metric: took 22.076452ms to wait for apiserver health ...
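The healthz wait is a plain HTTPS GET against the apiserver endpoint, treating a 200 with body "ok" as healthy. A self-contained sketch, skipping certificate verification because the target is a throwaway VM IP, and assuming /healthz permits anonymous access (minikube itself authenticates with the cluster's client certificates):

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// checkHealthz is a hypothetical probe of e.g. https://192.168.39.112:8443/healthz.
	func checkHealthz(url string) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(url)
		if err != nil {
			return err // not reachable yet; the caller keeps polling
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK || string(body) != "ok" {
			return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
		}
		return nil
	}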
	I0812 23:51:48.097259  820640 system_pods.go:43] waiting for kube-system pods to appear ...
	I0812 23:51:48.110461  820640 kapi.go:86] Found 5 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0812 23:51:48.110480  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:51:48.120060  820640 system_pods.go:59] 18 kube-system pods found
	I0812 23:51:48.120100  820640 system_pods.go:61] "coredns-558bd4d5db-nrmhx" [d29d822d-395c-41b5-b0e2-da279389bbec] Running
	I0812 23:51:48.120111  820640 system_pods.go:61] "csi-hostpath-attacher-0" [fbe99b08-6cc5-45fa-802a-ed8b76703f42] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) didn't match pod affinity rules, 1 node(s) didn't match pod affinity/anti-affinity rules.)
	I0812 23:51:48.120122  820640 system_pods.go:61] "csi-hostpath-provisioner-0" [ce38132b-d4a1-4e70-9187-91249dc38ba3] Pending
	I0812 23:51:48.120128  820640 system_pods.go:61] "csi-hostpath-resizer-0" [94959391-6a7c-4fdd-b6fa-319bf0fa6f4d] Pending
	I0812 23:51:48.120144  820640 system_pods.go:61] "csi-hostpath-snapshotter-0" [33d2f4ea-81bc-4103-9744-2824d94e49bd] Pending
	I0812 23:51:48.120155  820640 system_pods.go:61] "csi-hostpathplugin-0" [1bcafcd4-14b3-4dfb-89ae-6b7404318001] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-agent csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-agent csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe])
	I0812 23:51:48.120174  820640 system_pods.go:61] "etcd-addons-20210812235029-820289" [8daece70-7a2d-4326-b7c0-30fb90200178] Running
	I0812 23:51:48.120189  820640 system_pods.go:61] "kube-apiserver-addons-20210812235029-820289" [0ea4384b-12bf-49f9-a894-1a2c4c49398c] Running
	I0812 23:51:48.120196  820640 system_pods.go:61] "kube-controller-manager-addons-20210812235029-820289" [22e5456f-7ddc-4ded-92fd-fa5595527097] Running
	I0812 23:51:48.120203  820640 system_pods.go:61] "kube-proxy-d2mcw" [042d754e-0e03-4d55-b93b-ef111d733617] Running
	I0812 23:51:48.120211  820640 system_pods.go:61] "kube-scheduler-addons-20210812235029-820289" [1c254483-3a27-4118-a801-60257a879d5c] Running
	I0812 23:51:48.120225  820640 system_pods.go:61] "metrics-server-77c99ccb96-r4lnv" [4dd7c87f-48e5-4ffe-bb56-0130751d4aac] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0812 23:51:48.120237  820640 system_pods.go:61] "registry-proxy-qb8vd" [c6fcdaae-c81a-490b-8cfa-b7dd428f45a4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0812 23:51:48.120260  820640 system_pods.go:61] "registry-z6f8h" [6005aae5-98e4-43e4-851a-f9a9aa55d491] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0812 23:51:48.120274  820640 system_pods.go:61] "snapshot-controller-989f9ddc8-cng96" [9fc07a33-5e77-4de7-abbb-2d9ac8387e67] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0812 23:51:48.120290  820640 system_pods.go:61] "snapshot-controller-989f9ddc8-x5hdt" [dd913b13-dd94-490c-9190-7228ddd50469] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0812 23:51:48.120304  820640 system_pods.go:61] "storage-provisioner" [8c90fe00-a4ae-4fe6-bf9e-0ec3765e7f67] Running
	I0812 23:51:48.120313  820640 system_pods.go:61] "tiller-deploy-768d69497-p5rbk" [32966f96-4891-49a3-86a8-cfc0e2266a9f] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0812 23:51:48.120324  820640 system_pods.go:74] duration metric: took 23.058987ms to wait for pod list to return data ...
	I0812 23:51:48.120336  820640 default_sa.go:34] waiting for default service account to be created ...
	I0812 23:51:48.128776  820640 default_sa.go:45] found service account: "default"
	I0812 23:51:48.128798  820640 default_sa.go:55] duration metric: took 8.454884ms for default service account to be created ...
	I0812 23:51:48.128808  820640 system_pods.go:116] waiting for k8s-apps to be running ...
	I0812 23:51:48.142496  820640 system_pods.go:86] 18 kube-system pods found
	I0812 23:51:48.142525  820640 system_pods.go:89] "coredns-558bd4d5db-nrmhx" [d29d822d-395c-41b5-b0e2-da279389bbec] Running
	I0812 23:51:48.142536  820640 system_pods.go:89] "csi-hostpath-attacher-0" [fbe99b08-6cc5-45fa-802a-ed8b76703f42] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) didn't match pod affinity rules, 1 node(s) didn't match pod affinity/anti-affinity rules.)
	I0812 23:51:48.142544  820640 system_pods.go:89] "csi-hostpath-provisioner-0" [ce38132b-d4a1-4e70-9187-91249dc38ba3] Pending
	I0812 23:51:48.142551  820640 system_pods.go:89] "csi-hostpath-resizer-0" [94959391-6a7c-4fdd-b6fa-319bf0fa6f4d] Pending
	I0812 23:51:48.142561  820640 system_pods.go:89] "csi-hostpath-snapshotter-0" [33d2f4ea-81bc-4103-9744-2824d94e49bd] Pending
	I0812 23:51:48.142569  820640 system_pods.go:89] "csi-hostpathplugin-0" [1bcafcd4-14b3-4dfb-89ae-6b7404318001] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-agent csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-agent csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe])
	I0812 23:51:48.142586  820640 system_pods.go:89] "etcd-addons-20210812235029-820289" [8daece70-7a2d-4326-b7c0-30fb90200178] Running
	I0812 23:51:48.142594  820640 system_pods.go:89] "kube-apiserver-addons-20210812235029-820289" [0ea4384b-12bf-49f9-a894-1a2c4c49398c] Running
	I0812 23:51:48.142601  820640 system_pods.go:89] "kube-controller-manager-addons-20210812235029-820289" [22e5456f-7ddc-4ded-92fd-fa5595527097] Running
	I0812 23:51:48.142611  820640 system_pods.go:89] "kube-proxy-d2mcw" [042d754e-0e03-4d55-b93b-ef111d733617] Running
	I0812 23:51:48.142617  820640 system_pods.go:89] "kube-scheduler-addons-20210812235029-820289" [1c254483-3a27-4118-a801-60257a879d5c] Running
	I0812 23:51:48.142629  820640 system_pods.go:89] "metrics-server-77c99ccb96-r4lnv" [4dd7c87f-48e5-4ffe-bb56-0130751d4aac] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0812 23:51:48.142641  820640 system_pods.go:89] "registry-proxy-qb8vd" [c6fcdaae-c81a-490b-8cfa-b7dd428f45a4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0812 23:51:48.142654  820640 system_pods.go:89] "registry-z6f8h" [6005aae5-98e4-43e4-851a-f9a9aa55d491] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0812 23:51:48.142667  820640 system_pods.go:89] "snapshot-controller-989f9ddc8-cng96" [9fc07a33-5e77-4de7-abbb-2d9ac8387e67] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0812 23:51:48.142683  820640 system_pods.go:89] "snapshot-controller-989f9ddc8-x5hdt" [dd913b13-dd94-490c-9190-7228ddd50469] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0812 23:51:48.142693  820640 system_pods.go:89] "storage-provisioner" [8c90fe00-a4ae-4fe6-bf9e-0ec3765e7f67] Running
	I0812 23:51:48.142703  820640 system_pods.go:89] "tiller-deploy-768d69497-p5rbk" [32966f96-4891-49a3-86a8-cfc0e2266a9f] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0812 23:51:48.142713  820640 system_pods.go:126] duration metric: took 13.899136ms to wait for k8s-apps to be running ...
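The two near-identical 18-pod dumps (system_pods.go:59 and :89) come from listing everything in kube-system twice: once to confirm the pods exist at all, then again to verify the k8s-apps are Running. The listing step itself is a single call; same imports and hypothetical client/ctx as before:

	// listSystemPods prints each kube-system pod with its phase, the same
	// name/UID/phase triple the system_pods.go dumps above show.
	func listSystemPods(ctx context.Context, client kubernetes.Interface) error {
		pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, p := range pods.Items {
			fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
		}
		return nil
	}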
	I0812 23:51:48.142725  820640 system_svc.go:44] waiting for kubelet service to be running ...
	I0812 23:51:48.142775  820640 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0812 23:51:48.287502  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:51:48.500740  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:51:48.620314  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:51:48.772455  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:51:48.993673  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:51:49.132528  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:51:49.271441  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:51:49.496837  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:51:49.617746  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:51:49.771722  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:51:49.998384  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:51:50.116421  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:51:50.271496  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:51:50.488549  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:51:50.617020  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:51:50.781893  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:51:51.018925  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:51:51.118279  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:51:51.276701  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:51:51.499496  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:51:51.694413  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:51:51.809606  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:51:51.841221  820640 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (5.63656665s)
	I0812 23:51:51.841302  820640 main.go:130] libmachine: Making call to close driver server
	I0812 23:51:51.841322  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .Close
	I0812 23:51:51.841368  820640 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.55282887s)
	I0812 23:51:51.841407  820640 main.go:130] libmachine: Making call to close driver server
	I0812 23:51:51.841428  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .Close
	I0812 23:51:51.841405  820640 ssh_runner.go:189] Completed: sudo systemctl is-active --quiet service kubelet: (3.698606756s)
	I0812 23:51:51.841534  820640 system_svc.go:56] duration metric: took 3.698800138s (WaitForService) to wait for kubelet.
	I0812 23:51:51.841565  820640 kubeadm.go:547] duration metric: took 15.966801445s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0812 23:51:51.841639  820640 node_conditions.go:102] verifying NodePressure condition ...
	I0812 23:51:51.841660  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | Closing plugin on server side
	I0812 23:51:51.841660  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | Closing plugin on server side
	I0812 23:51:51.841727  820640 main.go:130] libmachine: Successfully made call to close driver server
	I0812 23:51:51.841765  820640 main.go:130] libmachine: Making call to close connection to plugin binary
	I0812 23:51:51.841774  820640 main.go:130] libmachine: Making call to close driver server
	I0812 23:51:51.841789  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .Close
	I0812 23:51:51.841647  820640 main.go:130] libmachine: Successfully made call to close driver server
	I0812 23:51:51.841877  820640 main.go:130] libmachine: Making call to close connection to plugin binary
	I0812 23:51:51.841902  820640 main.go:130] libmachine: Making call to close driver server
	I0812 23:51:51.841911  820640 main.go:130] libmachine: (addons-20210812235029-820289) Calling .Close
	I0812 23:51:51.841996  820640 main.go:130] libmachine: Successfully made call to close driver server
	I0812 23:51:51.842038  820640 main.go:130] libmachine: Making call to close connection to plugin binary
	I0812 23:51:51.842016  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | Closing plugin on server side
	I0812 23:51:51.843191  820640 main.go:130] libmachine: (addons-20210812235029-820289) DBG | Closing plugin on server side
	I0812 23:51:51.843224  820640 main.go:130] libmachine: Successfully made call to close driver server
	I0812 23:51:51.843238  820640 main.go:130] libmachine: Making call to close connection to plugin binary
	I0812 23:51:51.850157  820640 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0812 23:51:51.850193  820640 node_conditions.go:123] node cpu capacity is 2
	I0812 23:51:51.850211  820640 node_conditions.go:105] duration metric: took 8.548616ms to run NodePressure ...
	I0812 23:51:51.850223  820640 start.go:231] waiting for startup goroutines ...
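The NodePressure verification above reads the node object once, logs its capacity (the 17784752Ki / 2-cpu lines), and would fail if any pressure condition were True. A sketch, with the node name taken from the profile and the same hypothetical client/ctx as the earlier sketches:

	// checkNodePressure prints node capacity and errors out if the node
	// reports memory, disk, or PID pressure.
	func checkNodePressure(ctx context.Context, client kubernetes.Interface, name string) error {
		node, err := client.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		fmt.Println("ephemeral storage:", node.Status.Capacity.StorageEphemeral().String())
		fmt.Println("cpu:", node.Status.Capacity.Cpu().String())
		for _, c := range node.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				if c.Status == corev1.ConditionTrue {
					return fmt.Errorf("node under %s", c.Type)
				}
			}
		}
		return nil
	}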
	I0812 23:51:51.992439  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:51:52.119874  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:51:52.272068  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:51:52.495826  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:51:52.618182  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:51:52.771015  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:51:52.991025  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:51:53.116746  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:51:53.272281  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:51:53.493562  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:51:53.617620  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:51:53.778739  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:51:53.991588  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:51:54.119813  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:51:54.276784  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:51:54.490418  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:51:54.618497  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:51:54.772194  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:51:54.990163  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:51:55.116478  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:51:55.270942  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:51:55.489407  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:51:55.618992  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:51:55.772652  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:51:55.988232  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:51:56.119031  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:51:56.277711  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:51:56.492524  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:51:56.616953  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:51:56.772274  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:51:56.989391  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:51:57.117062  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:51:57.271283  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:51:57.488408  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:51:57.634677  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:51:57.775485  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:51:57.990438  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:51:58.118721  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:51:58.271366  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:51:58.488755  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:51:58.618889  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:51:58.775202  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:51:59.018590  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:51:59.117757  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:51:59.270512  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:51:59.493800  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:51:59.618350  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:51:59.776244  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:51:59.990540  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:00.117038  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:00.273367  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:00.491084  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:01.342004  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:01.345281  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:01.345658  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:01.489127  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:01.616626  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:01.773034  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:01.988131  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:02.123639  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:02.271630  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:02.492186  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:02.617802  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:02.772876  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:02.989222  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:03.116940  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:03.271176  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:03.492119  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:03.618595  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:03.771739  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:03.990298  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:04.123361  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:04.278692  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:04.494302  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:04.617602  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:04.771817  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:04.993309  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:05.125168  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:05.271640  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:05.490373  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:05.616795  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:05.775005  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:05.988951  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:06.118479  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:06.270609  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:06.501178  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:06.626520  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:06.770896  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:06.998614  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:07.120406  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:07.290417  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:07.524873  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:07.627688  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:07.771289  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:07.993465  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:08.122211  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:09.050651  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:09.050761  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:09.054100  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:09.118385  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:09.274460  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:09.491277  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:09.616050  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:09.771069  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:09.990929  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:10.123284  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:10.271575  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:10.510759  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:10.621020  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:10.779435  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:10.990735  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:11.115690  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:11.271117  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:11.532949  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:11.629253  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:11.770600  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:11.994500  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:12.117782  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:12.271461  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:12.613265  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:12.621931  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:12.770707  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:12.994140  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:13.117611  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:13.270929  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:13.497012  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:13.617207  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:13.770344  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:13.988579  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:14.123006  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:14.273424  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:14.488835  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:14.617213  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:14.770647  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:14.988223  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:15.115698  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:15.271577  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:15.490501  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:15.616651  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:15.771019  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:15.988018  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:16.115346  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:16.270305  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:16.488499  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:16.617255  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:16.772533  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:16.988148  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:17.115785  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:17.270867  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:17.494288  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:17.616740  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:17.770427  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:17.988473  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:18.137457  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:18.272966  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:18.489868  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:18.615303  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:18.773210  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:18.988703  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:19.123215  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:19.270999  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:19.512740  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:19.643594  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:20.297788  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:20.300201  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:20.306993  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:20.488929  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:20.632540  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:20.770242  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:20.988536  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:21.116775  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:21.271030  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:21.489599  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:21.617175  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:21.772313  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:21.988649  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:22.116666  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:22.273153  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:22.491217  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:22.618112  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:22.771800  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:22.996511  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:23.123659  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:23.272760  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:23.491624  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:23.616679  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:23.770598  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:23.989706  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:24.116844  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:24.271276  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:24.491228  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:24.617125  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:24.774223  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:24.988208  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:25.122535  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:25.273283  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:25.490895  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:25.617847  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:25.770495  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:25.993346  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:26.126117  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:26.270273  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:26.496344  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:26.617086  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:26.774117  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:26.988064  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:27.117094  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:27.274717  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:27.489030  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:27.617185  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:27.772202  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:27.988045  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:28.115841  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:28.271063  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:28.487591  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:28.617901  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:28.769343  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:28.987688  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:29.118938  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:29.272445  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:29.487516  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:29.621932  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:29.770972  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:29.988047  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:30.116176  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:30.270247  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:30.678255  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:30.683454  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:30.788771  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:30.989515  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:31.117569  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:31.270035  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:31.493584  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:31.619653  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:31.771415  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:31.988107  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:32.116874  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:32.270038  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:32.487993  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:32.616587  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:32.772609  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:32.987629  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:33.120447  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:33.273260  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:33.489419  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:33.616751  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:33.770926  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:33.993282  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:34.115773  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:34.270698  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:34.499720  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:34.615035  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:34.773292  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:34.987335  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:35.116840  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:35.275889  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:35.490512  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:35.620941  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:35.771108  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:35.989559  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:36.121745  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:36.272178  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:36.488028  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:36.616818  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:36.769925  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:36.988901  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:37.117155  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:37.271809  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:37.490331  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:37.617020  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:37.771170  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:37.989482  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:38.126848  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:38.272131  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:38.490441  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:38.618455  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:38.769932  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:38.989704  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:39.122608  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:39.283424  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:39.489028  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:39.617133  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:39.770333  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:39.988909  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:40.117927  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:40.270261  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:40.488023  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:40.628897  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:40.770352  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:40.987980  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:41.123897  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:41.271650  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:41.488892  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:41.648848  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:41.770606  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:41.988557  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:42.120919  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:42.271879  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:42.498786  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:42.622072  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:42.771503  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:42.989331  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:52:43.127496  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:43.274511  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:43.488602  820640 kapi.go:108] duration metric: took 57.555049776s to wait for kubernetes.io/minikube-addons=registry ...
	I0812 23:52:43.616550  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:43.770414  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:44.115950  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:44.271124  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:44.616791  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:44.770847  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:45.116570  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:45.275187  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:45.617130  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:45.770673  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:46.149248  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:46.272562  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:46.617508  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:46.771722  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:47.116890  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:47.288717  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:47.618752  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:47.771080  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:48.122668  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:48.277066  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:48.624290  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:48.772068  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:49.119209  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:49.271255  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:49.622910  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:49.770900  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:50.117592  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:50.270191  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:50.621303  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:50.771309  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:51.115948  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:51.269935  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:51.617212  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:51.777065  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:52.115648  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:52.270966  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:52.616691  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:53.219720  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:53.220405  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:53.271266  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:53.619290  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:53.770679  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:54.118341  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:54.270807  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:54.621885  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:55.033342  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:55.120955  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:55.272742  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:55.620781  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:55.772052  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:56.117169  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:56.274243  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:56.621049  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:56.777031  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:57.118684  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:57.270906  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:57.621377  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:57.772387  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:58.119302  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:58.273047  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:58.624970  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:58.771846  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:59.116425  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:59.270346  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:52:59.617257  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:52:59.778751  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:53:00.147878  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:53:00.286335  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:53:00.640930  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:53:00.778133  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:53:01.134179  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:53:01.297596  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:53:01.620836  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:53:01.785718  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:53:02.120936  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:53:02.270101  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:53:02.616817  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:53:02.771345  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:53:03.136306  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:53:03.270471  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:53:03.616695  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:53:03.769989  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:53:04.311116  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:53:04.311661  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:53:04.669566  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:53:04.769855  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:53:05.116131  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:53:05.270226  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:53:05.617596  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:53:05.771223  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:53:06.116973  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:53:06.271328  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:53:06.619833  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:53:06.772260  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:53:07.128977  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:53:07.277606  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:53:07.619647  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:53:07.776499  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:53:08.129754  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:53:08.272793  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:53:08.625131  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:53:08.776345  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:53:09.123941  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:53:09.282973  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:53:09.905946  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:53:09.905965  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:53:10.123275  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:53:10.272230  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:53:10.629520  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:53:10.771696  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:53:11.118483  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:53:11.272850  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:53:11.617513  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:53:11.771408  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:53:12.118082  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:53:12.271637  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:53:12.618992  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:53:12.771966  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:53:13.116930  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:53:13.272423  820640 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:53:13.627862  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:53:13.771511  820640 kapi.go:108] duration metric: took 1m32.553806222s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0812 23:53:14.124491  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:53:14.617652  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:53:15.117812  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:53:15.625906  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:53:16.135855  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:53:16.617205  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:53:17.119233  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:53:17.617649  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:53:18.118685  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:53:18.625509  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:53:19.118498  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:53:19.618579  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:53:20.123798  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:53:21.188687  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:53:21.619973  820640 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:53:22.120849  820640 kapi.go:108] duration metric: took 1m34.040340662s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0812 23:53:22.122847  820640 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, helm-tiller, metrics-server, volumesnapshots, olm, registry, ingress, csi-hostpath-driver
	I0812 23:53:22.122878  820640 addons.go:344] enableAddons completed in 1m46.248091079s
	I0812 23:53:22.167865  820640 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0812 23:53:22.169744  820640 out.go:177] * Done! kubectl is now configured to use "addons-20210812235029-820289" cluster and "default" namespace by default
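
The kapi.go:96 lines above come from minikube's per-addon wait loop: each addon is polled by listing pods that match a label selector until every match reports Running, and the kapi.go:108 lines record the total wait. Below is a minimal sketch of that polling pattern using client-go; the helper name waitForPodsRunning and the 500ms interval are illustrative assumptions, not minikube's actual implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodsRunning polls until every pod matching selector in ns reports
// phase Running, printing the current phase on each attempt (the
// "current state: Pending" lines seen in the log above). Illustrative only.
func waitForPodsRunning(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	start := time.Now()
	err := wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err != nil || len(pods.Items) == 0 {
			return false, nil // transient errors and empty lists: keep polling
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				return false, nil
			}
		}
		return true, nil
	})
	if err == nil {
		// Mirrors the kapi.go:108 "duration metric" line.
		fmt.Printf("duration metric: took %s to wait for %s ...\n", time.Since(start), selector)
	}
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForPodsRunning(cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 6*time.Minute); err != nil {
		panic(err)
	}
}
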
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Thu 2021-08-12 23:50:40 UTC, end at Thu 2021-08-12 23:58:19 UTC. --
	Aug 12 23:58:19 addons-20210812235029-820289 crio[2077]: time="2021-08-12 23:58:19.723609733Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a623f658-bc1a-44fa-8916-9699db4400f3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 12 23:58:19 addons-20210812235029-820289 crio[2077]: time="2021-08-12 23:58:19.725114035Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:24ca02020cbbcefbb4301721f21a1ab5f5f0d233591bff87384e21357f681688,PodSandboxId:ff8260447e181fa4ff44a2bd7a9d5f2f0944233b75334498e6a52e3bcb460218,Metadata:&ContainerMetadata{Name:etcd-restore-operator,Attempt:0,},Image:&ImageSpec{Image:9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628812496407685607,Labels:map[string]string{io.kubernetes.container.name: etcd-restore-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-c5jdd,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: a251dd29-51a3-45fd-aa38-d910126913ea,},Annotations:map[string]string{io.kubernetes.container.hash: 6a3106c4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d04086fbc6f40fc45bf4ec039228b8bf21b6aee8665969cb4eca57451130a7b,PodSandboxId:ff8260447e181fa4ff44a2bd7a9d5f2f0944233b75334498e6a52e3bcb460218,Metadata:&ContainerMetadata{Name:etcd-backup-operator,Attempt:0,},Image:&ImageSpec{Image:9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628812495914374041,Labels:map[string]string{io.kubernetes.container.name: etcd-backup-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-c5jdd,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: a251dd29-51a3-45fd-aa38-d910126913ea,},Annotations:map[string]string{io.kubernetes.container.hash: e9142ec4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ccfe0fabcfc227a5f5d8af68a480147d136b0325a3171fae86f0500ead9e2e7,PodSandboxId:ff8260447e181fa4ff44a2bd7a9d5f2f0944233b75334498e6a52e3bcb460218,Metadata:&ContainerMetadata{Name:etcd-operator,Attempt:0,},Image:&ImageSpec{Image:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628812495137946973,Labels:map[string]string{io.kubernetes.container.name: etcd-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-c5jdd,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: a251dd29-51a3-45fd-aa38-d910126913ea,},Annotations:map[string]string{io.kubernetes.container.hash: a4475c14,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26ebec6a37c253bc83dea77a39b63637ac39751a5a7db0f933e5ff9145716f62,PodSandboxId:6cfdc1bddbde4925f23c205676b88612dcba769396b29a83aaf0c37cc10779d8,Metadata:&ContainerMetadata{Name:private-image-eu,Attempt:0,},Image:&ImageSpec{Image:europe-west1-docker.pkg.dev/k8s-minikube/test-artifacts-eu/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,Annotations:map[string]string{},},ImageRef:europe-west1-docker.pkg.dev/k8s-minikube/test-artifacts-eu/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,State:CONTAINER_RUNNING,CreatedAt:1628812486994333072,Labels:map[string]string{io.kubernetes.container.name: private-image-eu,io.kubernetes.pod.name: private-image-eu-5956d58f9f-qhzdr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: abed0c04-d2a3-4c50-899b-16a1db4432a3,},Annotations:map[string]string{io.kubernetes.container.hash: 75354c1d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:589183e860536b37198eebf41ddd7769db7d1722c08aa9cae51ca788603fe806,PodSandboxId:846a34bd0f2a21c414651bc1fb00563e9eb01b1e8b0f7b9c3ef0c5c23cc13204,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce,State:CONTAINER_RUNNING,CreatedAt:1628812467777458064,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7419f352-61f4-4cd1-b1a1-5f2622b3f293,},Annotations:map[string]string{io.kubernetes.container.hash: 439f59e2,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:878beb2424135ce004d37826518b3ced2d8a28b834227a0530524c2602b7a393,PodSandboxId:c3feaa6d0f2117a4a7528982f8956a5ab57388301f49a42ccdba7c9aa99636b0,Metadata:&ContainerMetadata{Name:private-image,Attempt:0,},Image:&ImageSpec{Image:us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,Annotations:map[string]string{},},ImageRef:us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,State:CONTAINER_RUNNING,CreatedAt:1628812463068710000,Labels:map[string]string{io.kubernetes.container.name: private-image,io.kubernetes.pod.name: private-image-7ff9c8c74f-vrcqj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4c8cd4c-3a3d-4c12-8863-a7209f060be2,},Annotations:map[string]string{io.kubernetes.container.hash: c7e432c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfd76cbf91426603b0d7b370f45186bfc7349cb6acd625e74b50f2c97d6d5462,PodSandboxId:4a53b00e3fd62a312a1ece65ba5f7032a766e056449b25d85439b13677c59f00,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,State:CONTAINER_RUNNING,CreatedAt:1628812434394331053,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 32b4ebc6-1950-4af1-9d15-bd7d8ddbe52c,},Annotations:map[string]string{io.kubernetes.container.hash: 85f81115,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79637d5190536c29c355e710c67cd16200301bac18136d33c0a8a5e5f866b76f,PodSandboxId:13b1d5e59e433a1225a5f85815fe59a76e6ff63588e9f368688dc2351f0631b5,Metadata:&ContainerMetadata{Name:packageserver,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628812375743364158,Labels:map[string]string{io.kubernetes.container.name: packageserver,io.kubernetes.pod.name: packageserver-6488c6c757-lmf89,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 3f0a6c54-7925-430d-ab5f-a01d61a5edcb,},Annotations:map[string]string{io.kubernetes.container.hash: 42aad7cc,io.kubernetes.container.ports: [{\"containerPort\":5443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5e00a6b90a5271ce54b61377e461e4d88a19fd90dc06863d5a746577334ad1,PodSandboxId:162cbd30544c84d70345af95f654c7514300ee44f63755af89d98438e101376c,Metadata:&ContainerMetadata{Name:packageserver,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628812375330470665,Labels:map[string]string{io.kubernetes.container.name: packageserver,io.kubernetes.pod.name: packageserver-6488c6c757-ncn95,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: d1fe5f4e-801b-43bb-8d91-f98278b9b7e0,},Annotations:map[string]string{io.kubernetes.container.hash: 1ed90569,io.kubernetes.container.ports: [{\"containerPort\":5443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83fae35060ce317927df9c26e190d57130d6bf5ca320082e659c93d859772586,PodSandboxId:6a3b5b8a7e1c5939406c4754ff6db153de573b3ca6860cdad0a07cb06c3ff361,Metadata:&ContainerMetadata{Name:registry-server,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0,State:CONTAINER_RUNNING,CreatedAt:1628812374485600354,Labels:map[string]string{io.kubernetes.container.name: registry-server,io.kubernetes.pod.name: operatorhubio-catalog-lsjpz,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 2088d6f1-a488-4bbd-be2e-5a3f6482ffd3,},Annotations:map[string]string{io.kubernetes.container.hash: a6fa37,io.kubernetes.container.ports: [{\"name\":\"grpc\",\"containerPort\":50051,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d73d646852fe4aa976ce0bd39287080be283652ecd5611a35d72cd03dfa3b2,PodSandboxId:fba557957f59a84847ff0a75ecba6ae1dad057e8e2caabae2f5f455c1ff052de,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1628812331035309077,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-6dngl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4ba67514-8f40-41b0-ad6f-3db817ed7f3e,},Annotations:map[string]string{io.kubernetes.container.hash: 424b9e3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c992643e54f96cd3ce9001ef74b8bbf6dda8febaaebe4496e8614586458fb9f8,PodSandboxId:0bba02f5b4bb2245a2fdd76ea6d06a9740e7c8570d360ea5636fa4644152e896,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1628812330084737828,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jswnd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 35929738-f6a6-4263-91dd-5e83b165fc66,},Annotations:map[string]string{io.kubernetes.container.hash: 784a9392,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47de47c596fe3e327aa94468d4ee52f31f4cc9eed74ced1c79468444052c72b3,PodSandboxId:72c70654465a7b64a9a596f260c12513b66f36d39aaf6c5338f070a68af7ee27,Metadata:&ContainerMetadata{Name:olm-operator,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628812324661923427,Labels:map[string]string{io.kubernetes.container.name: olm-operator,io.kubernetes.pod.name: olm-operator-859c88c96-vsdlp,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 70ea73ed-8703-41ef-8b44-4383788352e1,},Annotations:map[string]string{io.kubernetes.container.hash: 456d2455,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":8081,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cc66b29ae716721d7cfc70ab42c7883f2440ff9da5a55396d21d971ec76e878,PodSandboxId:c3dbb81d724b8247891b3fd6a1b98092c6ddc807e400c3e1e72fde8a5842ecea,Metadata:&ContainerMetadata{Name:catalog-operator,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628812324027304855,Labels:map[string]string{io.kubernetes.container.name: catalog-operator,io.kubernetes.pod.name: catalog-operator-75d496484d-x22rr,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 9ef3c8dc-b0a8-41e1-a9be-ad8ff5a7cd49,},Annotations:map[string]string{io.kubernetes.container.hash: 22ab0afe,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":8081,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b83db69e2b1957cd9c27a10214e1e150f3ebf424abed5d0d8af9e975dd330c37,PodSandboxId:3523d49ad00e35dbcb993902231434cb2254feae748306a826d2e9ecf43d6807,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628812299895268336,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c90fe00-a4ae-4fe6-bf9e-0ec3765e7f67,},Annotations:map[string]string{io.kubernetes.container.hash: 9d6a3819,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:144d0dbab7b4bb72eeb618430d51609f43dcea9e62c5c5c6cfc53589202468dc,PodSandboxId:4bcf0afaab52b40716bd095cd300edd339f0fe0eca9524deb4f9189e4f3719e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628812297745577752,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d2mcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 042d754e-0e03-4d55-b93b-ef111d733617,},Annotations:map[string]string{io.kubernetes.container.hash: 61224e33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:252ee0cdeea822fe191a791748af57151814b3174fe9d7588e8b35af8180baa5,PodSandboxId:9a14ebd4bb39be8b1a2c719112f54e151d667bc4600adcc52bdb2e171863be40,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628812296301812126,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-nrmhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d29d822d-395c-41b5-b0e2-da279389bbec,},Annotations:map[string]string{io.kubernetes.container.hash: e8530073,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:832c0a74513bf5368b3dddeaa34591b32693c3d659a90e894db641d30945a94e,PodSandboxId:c8eddf71fb45c8b895a0205e186713a3cbfd052751758142deae5200187b79c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628812273986650559,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-20210812235029-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1641765115f1c1317b2417660238322a,},Annotations:map[string]string{io.kubernetes.container.hash: 17a239e8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b610af4737f78f6e81098816b4d4bb32edad7e272eff91eedab1e1228915246,PodSandboxId:d414c06f351ebe53bbd8956c461fdef373a43169401904dfb8707770ac30753a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628812273685167504,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-20210812235029-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00218d7ecf1ada3625b3f1636c3a79de,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df7df43ae185dc0ace9a7e00d73739e38efa52af36ac1dd82116971a026fb789,PodSandboxId:8cfd5098d1c279eb09d72d1589c1c302e7a8df133b2650335202de95d57f8730,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628812273619122504,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-20210812235029-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914178d72dc0b528da5ffaf8d8c376af,},Annotations:map[string]string{io.kubernetes.container.hash: 9cfa5d4f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d1f01676b8baf312ddbcf8715f52aa8544b29bb3b38375a6f3fa993969901fb,PodSandboxId:53e33974fb6735e31179ccc48bba01c3602647c854a1ba52ad1486f2dfc26a5e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628812273106325676,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-20210812235029-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 143e03afe6ebd93d6ba969540d8c9889,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a623f658-bc1a-44fa-8916-9699db4400f3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 12 23:58:19 addons-20210812235029-820289 crio[2077]: time="2021-08-12 23:58:19.741256178Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b3627170-0fdd-44c2-afc0-0596f3b260eb name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 12 23:58:19 addons-20210812235029-820289 crio[2077]: time="2021-08-12 23:58:19.741304878Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b3627170-0fdd-44c2-afc0-0596f3b260eb name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 12 23:58:19 addons-20210812235029-820289 crio[2077]: time="2021-08-12 23:58:19.741735279Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:24ca02020cbbcefbb4301721f21a1ab5f5f0d233591bff87384e21357f681688,PodSandboxId:ff8260447e181fa4ff44a2bd7a9d5f2f0944233b75334498e6a52e3bcb460218,Metadata:&ContainerMetadata{Name:etcd-restore-operator,Attempt:0,},Image:&ImageSpec{Image:9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628812496407685607,Labels:map[string]string{io.kubernetes.container.name: etcd-restore-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-c5jdd,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: a251dd29-51a3-45fd-aa38-d910126913ea,},Annotations:map[string]string{io.kubernetes.container.hash: 6a3106c4,io.kubernetes.container.restartCoun
t: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d04086fbc6f40fc45bf4ec039228b8bf21b6aee8665969cb4eca57451130a7b,PodSandboxId:ff8260447e181fa4ff44a2bd7a9d5f2f0944233b75334498e6a52e3bcb460218,Metadata:&ContainerMetadata{Name:etcd-backup-operator,Attempt:0,},Image:&ImageSpec{Image:9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628812495914374041,Labels:map[string]string{io.kubernetes.container.name: etcd-backup-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-c5jdd,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: a251dd29-51a3-45fd-aa38-d910126913ea,},Annotations:map[string]string{io.kubernetes.container.hash: e9142ec4,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ccfe0fabcfc227a5f5d8af68a480147d136b0325a3171fae86f0500ead9e2e7,PodSandboxId:ff8260447e181fa4ff44a2bd7a9d5f2f0944233b75334498e6a52e3bcb460218,Metadata:&ContainerMetadata{Name:etcd-operator,Attempt:0,},Image:&ImageSpec{Image:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628812495137946973,Labels:map[string]string{io.kubernetes.container.name: etcd-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-c5jdd,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: a251dd29-51a3-45fd-aa38-d910126913ea,},Annotations:map[string]string{io.kubernetes.container.hash: a4475c14,io.kubernetes.conta
iner.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26ebec6a37c253bc83dea77a39b63637ac39751a5a7db0f933e5ff9145716f62,PodSandboxId:6cfdc1bddbde4925f23c205676b88612dcba769396b29a83aaf0c37cc10779d8,Metadata:&ContainerMetadata{Name:private-image-eu,Attempt:0,},Image:&ImageSpec{Image:europe-west1-docker.pkg.dev/k8s-minikube/test-artifacts-eu/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,Annotations:map[string]string{},},ImageRef:europe-west1-docker.pkg.dev/k8s-minikube/test-artifacts-eu/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,State:CONTAINER_RUNNING,CreatedAt:1628812486994333072,Labels:map[string]string{io.kubernetes.container.name: private-image-eu,io.kubernetes.pod.name: private-image-eu-5956d58f9f-qhzdr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: abed0c04-d2a3-4c5
0-899b-16a1db4432a3,},Annotations:map[string]string{io.kubernetes.container.hash: 75354c1d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:589183e860536b37198eebf41ddd7769db7d1722c08aa9cae51ca788603fe806,PodSandboxId:846a34bd0f2a21c414651bc1fb00563e9eb01b1e8b0f7b9c3ef0c5c23cc13204,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce,State:CONTAINER_RUNNING,CreatedAt:1628812467777458064,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7419f352-61f4-4cd1-b1a1-5f2622b3f293,},Annotation
s:map[string]string{io.kubernetes.container.hash: 439f59e2,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:878beb2424135ce004d37826518b3ced2d8a28b834227a0530524c2602b7a393,PodSandboxId:c3feaa6d0f2117a4a7528982f8956a5ab57388301f49a42ccdba7c9aa99636b0,Metadata:&ContainerMetadata{Name:private-image,Attempt:0,},Image:&ImageSpec{Image:us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,Annotations:map[string]string{},},ImageRef:us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,State:CONTAINER_RUNNING,CreatedAt:1628812463068710000,Labels:map[string]string{io.kubernetes.container.name: private-image,io.kubernete
s.pod.name: private-image-7ff9c8c74f-vrcqj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4c8cd4c-3a3d-4c12-8863-a7209f060be2,},Annotations:map[string]string{io.kubernetes.container.hash: c7e432c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfd76cbf91426603b0d7b370f45186bfc7349cb6acd625e74b50f2c97d6d5462,PodSandboxId:4a53b00e3fd62a312a1ece65ba5f7032a766e056449b25d85439b13677c59f00,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,State:CONTAINER_RUNNING,CreatedAt:1628812434394331053,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernet
es.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 32b4ebc6-1950-4af1-9d15-bd7d8ddbe52c,},Annotations:map[string]string{io.kubernetes.container.hash: 85f81115,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79637d5190536c29c355e710c67cd16200301bac18136d33c0a8a5e5f866b76f,PodSandboxId:13b1d5e59e433a1225a5f85815fe59a76e6ff63588e9f368688dc2351f0631b5,Metadata:&ContainerMetadata{Name:packageserver,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628812375743364158,Labels:map[string]string{io.kubernetes.container.name: packageserver,io.kubernet
es.pod.name: packageserver-6488c6c757-lmf89,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 3f0a6c54-7925-430d-ab5f-a01d61a5edcb,},Annotations:map[string]string{io.kubernetes.container.hash: 42aad7cc,io.kubernetes.container.ports: [{\"containerPort\":5443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5e00a6b90a5271ce54b61377e461e4d88a19fd90dc06863d5a746577334ad1,PodSandboxId:162cbd30544c84d70345af95f654c7514300ee44f63755af89d98438e101376c,Metadata:&ContainerMetadata{Name:packageserver,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUN
NING,CreatedAt:1628812375330470665,Labels:map[string]string{io.kubernetes.container.name: packageserver,io.kubernetes.pod.name: packageserver-6488c6c757-ncn95,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: d1fe5f4e-801b-43bb-8d91-f98278b9b7e0,},Annotations:map[string]string{io.kubernetes.container.hash: 1ed90569,io.kubernetes.container.ports: [{\"containerPort\":5443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83fae35060ce317927df9c26e190d57130d6bf5ca320082e659c93d859772586,PodSandboxId:6a3b5b8a7e1c5939406c4754ff6db153de573b3ca6860cdad0a07cb06c3ff361,Metadata:&ContainerMetadata{Name:registry-server,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0,Annotations:map[string]
string{},},ImageRef:quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0,State:CONTAINER_RUNNING,CreatedAt:1628812374485600354,Labels:map[string]string{io.kubernetes.container.name: registry-server,io.kubernetes.pod.name: operatorhubio-catalog-lsjpz,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 2088d6f1-a488-4bbd-be2e-5a3f6482ffd3,},Annotations:map[string]string{io.kubernetes.container.hash: a6fa37,io.kubernetes.container.ports: [{\"name\":\"grpc\",\"containerPort\":50051,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d73d646852fe4aa976ce0bd39287080be283652ecd5611a35d72cd03dfa3b2,PodSandboxId:fba557957f59a84847ff0a75ecba6ae1dad057e8e2caabae2f5f455c1ff052de,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{I
mage:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1628812331035309077,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-6dngl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4ba67514-8f40-41b0-ad6f-3db817ed7f3e,},Annotations:map[string]string{io.kubernetes.container.hash: 424b9e3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c992643e54f96cd3ce9001ef74b8bbf6dda8febaaebe4496e8614586458fb9f8,PodSandboxId:0bba02f5b4bb2245a2fdd76ea6d06a9740e7c8570d360ea5636fa4644152e896,Metadata:&ContainerMetadata{Name:cre
ate,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1628812330084737828,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jswnd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 35929738-f6a6-4263-91dd-5e83b165fc66,},Annotations:map[string]string{io.kubernetes.container.hash: 784a9392,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47de47c596fe3e327aa94468d4ee52f31f4cc9eed74ced1c79468444052c72b3,PodSandboxId:72c70654465a7b64a9a596f260c12513b66f36d39aaf6c5338f070a68af7ee27,
Metadata:&ContainerMetadata{Name:olm-operator,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628812324661923427,Labels:map[string]string{io.kubernetes.container.name: olm-operator,io.kubernetes.pod.name: olm-operator-859c88c96-vsdlp,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 70ea73ed-8703-41ef-8b44-4383788352e1,},Annotations:map[string]string{io.kubernetes.container.hash: 456d2455,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":8081,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminatio
nGracePeriod: 30,},},&Container{Id:3cc66b29ae716721d7cfc70ab42c7883f2440ff9da5a55396d21d971ec76e878,PodSandboxId:c3dbb81d724b8247891b3fd6a1b98092c6ddc807e400c3e1e72fde8a5842ecea,Metadata:&ContainerMetadata{Name:catalog-operator,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628812324027304855,Labels:map[string]string{io.kubernetes.container.name: catalog-operator,io.kubernetes.pod.name: catalog-operator-75d496484d-x22rr,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 9ef3c8dc-b0a8-41e1-a9be-ad8ff5a7cd49,},Annotations:map[string]string{io.kubernetes.container.hash: 22ab0afe,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":8081,\"protocol\":\"TCP\"}],io.kubernetes.cont
ainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b83db69e2b1957cd9c27a10214e1e150f3ebf424abed5d0d8af9e975dd330c37,PodSandboxId:3523d49ad00e35dbcb993902231434cb2254feae748306a826d2e9ecf43d6807,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628812299895268336,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c90fe00-a4ae-4fe6-bf9e-0ec3765e7f67,},Annotations:map[string]string{io.kubernetes.container.hash: 9d6a3819,io
.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:144d0dbab7b4bb72eeb618430d51609f43dcea9e62c5c5c6cfc53589202468dc,PodSandboxId:4bcf0afaab52b40716bd095cd300edd339f0fe0eca9524deb4f9189e4f3719e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628812297745577752,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d2mcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 042d754e-0e03-4d55-b93b-ef111d733617,},Annotations:map[string]string{io.kubernetes.container.hash: 61224e33,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:252ee0cdeea822fe191a791748af57151814b3174fe9d7588e8b35af8180baa5,PodSandboxId:9a14ebd4bb39be8b1a2c719112f54e151d667bc4600adcc52bdb2e171863be40,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628812296301812126,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-nrmhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d29d822d-395c-41b5-b0e2-da279389bbec,},Annotations:map[string]string{io.kubernetes.container.hash: e8530073,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:832c0a74513bf5368b3dddeaa34591b32693c3d659a90e894db641d30945a94e,PodSandboxId:c8eddf71fb45c8b895a0205e186713a3cbfd052751758142deae5200187b79c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628812273986650559,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-20210812235029-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 1641765115f1c1317b2417660238322a,},Annotations:map[string]string{io.kubernetes.container.hash: 17a239e8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b610af4737f78f6e81098816b4d4bb32edad7e272eff91eedab1e1228915246,PodSandboxId:d414c06f351ebe53bbd8956c461fdef373a43169401904dfb8707770ac30753a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628812273685167504,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-20210812235029-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 002
18d7ecf1ada3625b3f1636c3a79de,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df7df43ae185dc0ace9a7e00d73739e38efa52af36ac1dd82116971a026fb789,PodSandboxId:8cfd5098d1c279eb09d72d1589c1c302e7a8df133b2650335202de95d57f8730,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628812273619122504,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-20210812235029-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914178d72
dc0b528da5ffaf8d8c376af,},Annotations:map[string]string{io.kubernetes.container.hash: 9cfa5d4f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d1f01676b8baf312ddbcf8715f52aa8544b29bb3b38375a6f3fa993969901fb,PodSandboxId:53e33974fb6735e31179ccc48bba01c3602647c854a1ba52ad1486f2dfc26a5e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628812273106325676,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-20210812235029-820289,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 143e03afe6ebd93d6ba969540d8c9889,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b3627170-0fdd-44c2-afc0-0596f3b260eb name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 12 23:58:19 addons-20210812235029-820289 crio[2077]: time="2021-08-12 23:58:19.790081153Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=803b850a-90ca-46bd-a1e6-83597b7a130b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 12 23:58:19 addons-20210812235029-820289 crio[2077]: time="2021-08-12 23:58:19.790217027Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=803b850a-90ca-46bd-a1e6-83597b7a130b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 12 23:58:19 addons-20210812235029-820289 crio[2077]: time="2021-08-12 23:58:19.790759963Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:24ca02020cbbcefbb4301721f21a1ab5f5f0d233591bff87384e21357f681688,PodSandboxId:ff8260447e181fa4ff44a2bd7a9d5f2f0944233b75334498e6a52e3bcb460218,Metadata:&ContainerMetadata{Name:etcd-restore-operator,Attempt:0,},Image:&ImageSpec{Image:9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628812496407685607,Labels:map[string]string{io.kubernetes.container.name: etcd-restore-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-c5jdd,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: a251dd29-51a3-45fd-aa38-d910126913ea,},Annotations:map[string]string{io.kubernetes.container.hash: 6a3106c4,io.kubernetes.container.restartCoun
t: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d04086fbc6f40fc45bf4ec039228b8bf21b6aee8665969cb4eca57451130a7b,PodSandboxId:ff8260447e181fa4ff44a2bd7a9d5f2f0944233b75334498e6a52e3bcb460218,Metadata:&ContainerMetadata{Name:etcd-backup-operator,Attempt:0,},Image:&ImageSpec{Image:9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628812495914374041,Labels:map[string]string{io.kubernetes.container.name: etcd-backup-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-c5jdd,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: a251dd29-51a3-45fd-aa38-d910126913ea,},Annotations:map[string]string{io.kubernetes.container.hash: e9142ec4,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ccfe0fabcfc227a5f5d8af68a480147d136b0325a3171fae86f0500ead9e2e7,PodSandboxId:ff8260447e181fa4ff44a2bd7a9d5f2f0944233b75334498e6a52e3bcb460218,Metadata:&ContainerMetadata{Name:etcd-operator,Attempt:0,},Image:&ImageSpec{Image:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628812495137946973,Labels:map[string]string{io.kubernetes.container.name: etcd-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-c5jdd,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: a251dd29-51a3-45fd-aa38-d910126913ea,},Annotations:map[string]string{io.kubernetes.container.hash: a4475c14,io.kubernetes.conta
iner.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26ebec6a37c253bc83dea77a39b63637ac39751a5a7db0f933e5ff9145716f62,PodSandboxId:6cfdc1bddbde4925f23c205676b88612dcba769396b29a83aaf0c37cc10779d8,Metadata:&ContainerMetadata{Name:private-image-eu,Attempt:0,},Image:&ImageSpec{Image:europe-west1-docker.pkg.dev/k8s-minikube/test-artifacts-eu/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,Annotations:map[string]string{},},ImageRef:europe-west1-docker.pkg.dev/k8s-minikube/test-artifacts-eu/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,State:CONTAINER_RUNNING,CreatedAt:1628812486994333072,Labels:map[string]string{io.kubernetes.container.name: private-image-eu,io.kubernetes.pod.name: private-image-eu-5956d58f9f-qhzdr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: abed0c04-d2a3-4c5
0-899b-16a1db4432a3,},Annotations:map[string]string{io.kubernetes.container.hash: 75354c1d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:589183e860536b37198eebf41ddd7769db7d1722c08aa9cae51ca788603fe806,PodSandboxId:846a34bd0f2a21c414651bc1fb00563e9eb01b1e8b0f7b9c3ef0c5c23cc13204,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce,State:CONTAINER_RUNNING,CreatedAt:1628812467777458064,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7419f352-61f4-4cd1-b1a1-5f2622b3f293,},Annotation
s:map[string]string{io.kubernetes.container.hash: 439f59e2,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:878beb2424135ce004d37826518b3ced2d8a28b834227a0530524c2602b7a393,PodSandboxId:c3feaa6d0f2117a4a7528982f8956a5ab57388301f49a42ccdba7c9aa99636b0,Metadata:&ContainerMetadata{Name:private-image,Attempt:0,},Image:&ImageSpec{Image:us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,Annotations:map[string]string{},},ImageRef:us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,State:CONTAINER_RUNNING,CreatedAt:1628812463068710000,Labels:map[string]string{io.kubernetes.container.name: private-image,io.kubernete
s.pod.name: private-image-7ff9c8c74f-vrcqj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4c8cd4c-3a3d-4c12-8863-a7209f060be2,},Annotations:map[string]string{io.kubernetes.container.hash: c7e432c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfd76cbf91426603b0d7b370f45186bfc7349cb6acd625e74b50f2c97d6d5462,PodSandboxId:4a53b00e3fd62a312a1ece65ba5f7032a766e056449b25d85439b13677c59f00,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,State:CONTAINER_RUNNING,CreatedAt:1628812434394331053,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernet
es.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 32b4ebc6-1950-4af1-9d15-bd7d8ddbe52c,},Annotations:map[string]string{io.kubernetes.container.hash: 85f81115,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79637d5190536c29c355e710c67cd16200301bac18136d33c0a8a5e5f866b76f,PodSandboxId:13b1d5e59e433a1225a5f85815fe59a76e6ff63588e9f368688dc2351f0631b5,Metadata:&ContainerMetadata{Name:packageserver,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628812375743364158,Labels:map[string]string{io.kubernetes.container.name: packageserver,io.kubernet
es.pod.name: packageserver-6488c6c757-lmf89,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 3f0a6c54-7925-430d-ab5f-a01d61a5edcb,},Annotations:map[string]string{io.kubernetes.container.hash: 42aad7cc,io.kubernetes.container.ports: [{\"containerPort\":5443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5e00a6b90a5271ce54b61377e461e4d88a19fd90dc06863d5a746577334ad1,PodSandboxId:162cbd30544c84d70345af95f654c7514300ee44f63755af89d98438e101376c,Metadata:&ContainerMetadata{Name:packageserver,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUN
NING,CreatedAt:1628812375330470665,Labels:map[string]string{io.kubernetes.container.name: packageserver,io.kubernetes.pod.name: packageserver-6488c6c757-ncn95,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: d1fe5f4e-801b-43bb-8d91-f98278b9b7e0,},Annotations:map[string]string{io.kubernetes.container.hash: 1ed90569,io.kubernetes.container.ports: [{\"containerPort\":5443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83fae35060ce317927df9c26e190d57130d6bf5ca320082e659c93d859772586,PodSandboxId:6a3b5b8a7e1c5939406c4754ff6db153de573b3ca6860cdad0a07cb06c3ff361,Metadata:&ContainerMetadata{Name:registry-server,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0,Annotations:map[string]
string{},},ImageRef:quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0,State:CONTAINER_RUNNING,CreatedAt:1628812374485600354,Labels:map[string]string{io.kubernetes.container.name: registry-server,io.kubernetes.pod.name: operatorhubio-catalog-lsjpz,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 2088d6f1-a488-4bbd-be2e-5a3f6482ffd3,},Annotations:map[string]string{io.kubernetes.container.hash: a6fa37,io.kubernetes.container.ports: [{\"name\":\"grpc\",\"containerPort\":50051,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d73d646852fe4aa976ce0bd39287080be283652ecd5611a35d72cd03dfa3b2,PodSandboxId:fba557957f59a84847ff0a75ecba6ae1dad057e8e2caabae2f5f455c1ff052de,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{I
mage:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1628812331035309077,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-6dngl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4ba67514-8f40-41b0-ad6f-3db817ed7f3e,},Annotations:map[string]string{io.kubernetes.container.hash: 424b9e3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c992643e54f96cd3ce9001ef74b8bbf6dda8febaaebe4496e8614586458fb9f8,PodSandboxId:0bba02f5b4bb2245a2fdd76ea6d06a9740e7c8570d360ea5636fa4644152e896,Metadata:&ContainerMetadata{Name:cre
ate,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1628812330084737828,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jswnd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 35929738-f6a6-4263-91dd-5e83b165fc66,},Annotations:map[string]string{io.kubernetes.container.hash: 784a9392,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47de47c596fe3e327aa94468d4ee52f31f4cc9eed74ced1c79468444052c72b3,PodSandboxId:72c70654465a7b64a9a596f260c12513b66f36d39aaf6c5338f070a68af7ee27,
Metadata:&ContainerMetadata{Name:olm-operator,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628812324661923427,Labels:map[string]string{io.kubernetes.container.name: olm-operator,io.kubernetes.pod.name: olm-operator-859c88c96-vsdlp,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 70ea73ed-8703-41ef-8b44-4383788352e1,},Annotations:map[string]string{io.kubernetes.container.hash: 456d2455,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":8081,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminatio
nGracePeriod: 30,},},&Container{Id:3cc66b29ae716721d7cfc70ab42c7883f2440ff9da5a55396d21d971ec76e878,PodSandboxId:c3dbb81d724b8247891b3fd6a1b98092c6ddc807e400c3e1e72fde8a5842ecea,Metadata:&ContainerMetadata{Name:catalog-operator,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628812324027304855,Labels:map[string]string{io.kubernetes.container.name: catalog-operator,io.kubernetes.pod.name: catalog-operator-75d496484d-x22rr,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 9ef3c8dc-b0a8-41e1-a9be-ad8ff5a7cd49,},Annotations:map[string]string{io.kubernetes.container.hash: 22ab0afe,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":8081,\"protocol\":\"TCP\"}],io.kubernetes.cont
ainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b83db69e2b1957cd9c27a10214e1e150f3ebf424abed5d0d8af9e975dd330c37,PodSandboxId:3523d49ad00e35dbcb993902231434cb2254feae748306a826d2e9ecf43d6807,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628812299895268336,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c90fe00-a4ae-4fe6-bf9e-0ec3765e7f67,},Annotations:map[string]string{io.kubernetes.container.hash: 9d6a3819,io
.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:144d0dbab7b4bb72eeb618430d51609f43dcea9e62c5c5c6cfc53589202468dc,PodSandboxId:4bcf0afaab52b40716bd095cd300edd339f0fe0eca9524deb4f9189e4f3719e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628812297745577752,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d2mcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 042d754e-0e03-4d55-b93b-ef111d733617,},Annotations:map[string]string{io.kubernetes.container.hash: 61224e33,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:252ee0cdeea822fe191a791748af57151814b3174fe9d7588e8b35af8180baa5,PodSandboxId:9a14ebd4bb39be8b1a2c719112f54e151d667bc4600adcc52bdb2e171863be40,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628812296301812126,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-nrmhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d29d822d-395c-41b5-b0e2-da279389bbec,},Annotations:map[string]string{io.kubernetes.container.hash: e8530073,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:832c0a74513bf5368b3dddeaa34591b32693c3d659a90e894db641d30945a94e,PodSandboxId:c8eddf71fb45c8b895a0205e186713a3cbfd052751758142deae5200187b79c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628812273986650559,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-20210812235029-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 1641765115f1c1317b2417660238322a,},Annotations:map[string]string{io.kubernetes.container.hash: 17a239e8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b610af4737f78f6e81098816b4d4bb32edad7e272eff91eedab1e1228915246,PodSandboxId:d414c06f351ebe53bbd8956c461fdef373a43169401904dfb8707770ac30753a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628812273685167504,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-20210812235029-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 002
18d7ecf1ada3625b3f1636c3a79de,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df7df43ae185dc0ace9a7e00d73739e38efa52af36ac1dd82116971a026fb789,PodSandboxId:8cfd5098d1c279eb09d72d1589c1c302e7a8df133b2650335202de95d57f8730,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628812273619122504,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-20210812235029-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914178d72
dc0b528da5ffaf8d8c376af,},Annotations:map[string]string{io.kubernetes.container.hash: 9cfa5d4f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d1f01676b8baf312ddbcf8715f52aa8544b29bb3b38375a6f3fa993969901fb,PodSandboxId:53e33974fb6735e31179ccc48bba01c3602647c854a1ba52ad1486f2dfc26a5e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628812273106325676,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-20210812235029-820289,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 143e03afe6ebd93d6ba969540d8c9889,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=803b850a-90ca-46bd-a1e6-83597b7a130b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 12 23:58:19 addons-20210812235029-820289 crio[2077]: time="2021-08-12 23:58:19.821831932Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7991e41e-524b-4b8b-93bd-75bd9b4d37aa name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Aug 12 23:58:19 addons-20210812235029-820289 crio[2077]: time="2021-08-12 23:58:19.822665655Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:ff8260447e181fa4ff44a2bd7a9d5f2f0944233b75334498e6a52e3bcb460218,Metadata:&PodSandboxMetadata{Name:etcd-operator-85cd4f54cd-c5jdd,Uid:a251dd29-51a3-45fd-aa38-d910126913ea,Namespace:my-etcd,Attempt:0,},State:SANDBOX_READY,CreatedAt:1628812480024453787,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-c5jdd,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: a251dd29-51a3-45fd-aa38-d910126913ea,name: etcd-operator-alm-owned,pod-template-hash: 85cd4f54cd,},Annotations:map[string]string{alm-examples: [\n  {\n    \"apiVersion\": \"etcd.database.coreos.com/v1beta2\",\n    \"kind\": \"EtcdCluster\",\n    \"metadata\": {\n      \"name\": \"example\"\n    },\n    \"spec\": {\n      \"size\": 3,\n      \"version\": \"3.2.13\"\n    }\n  },\n  {\n    \"apiVersion\": \"e
tcd.database.coreos.com/v1beta2\",\n    \"kind\": \"EtcdRestore\",\n    \"metadata\": {\n      \"name\": \"example-etcd-cluster-restore\"\n    },\n    \"spec\": {\n      \"etcdCluster\": {\n        \"name\": \"example-etcd-cluster\"\n      },\n      \"backupStorageType\": \"S3\",\n      \"s3\": {\n        \"path\": \"<full-s3-path>\",\n        \"awsSecret\": \"<aws-secret>\"\n      }\n    }\n  },\n  {\n    \"apiVersion\": \"etcd.database.coreos.com/v1beta2\",\n    \"kind\": \"EtcdBackup\",\n    \"metadata\": {\n      \"name\": \"example-etcd-cluster-backup\"\n    },\n    \"spec\": {\n      \"etcdEndpoints\": [\"<etcd-cluster-endpoints>\"],\n      \"storageType\":\"S3\",\n      \"s3\": {\n        \"path\": \"<full-s3-path>\",\n        \"awsSecret\": \"<aws-secret>\"\n      }\n    }\n  }\n]\n,capabilities: Full Lifecycle,categories: Database,containerImage: quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,createdAt: 2019-02-28 01:03:00,description: Create and
maintain highly-available etcd clusters on Kubernetes,kubernetes.io/config.seen: 2021-08-12T23:54:39.547856839Z,kubernetes.io/config.source: api,olm.operatorGroup: operatorgroup,olm.operatorNamespace: my-etcd,olm.targetNamespaces: my-etcd,operatorframework.io/properties: {\"properties\":[{\"type\":\"olm.gvk\",\"value\":{\"group\":\"etcd.database.coreos.com\",\"kind\":\"EtcdBackup\",\"version\":\"v1beta2\"}}]},repository: https://github.com/coreos/etcd-operator,tectonic-visibility: ocs,},RuntimeHandler:,},&PodSandbox{Id:6cfdc1bddbde4925f23c205676b88612dcba769396b29a83aaf0c37cc10779d8,Metadata:&PodSandboxMetadata{Name:private-image-eu-5956d58f9f-qhzdr,Uid:abed0c04-d2a3-4c50-899b-16a1db4432a3,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1628812469169366471,Labels:map[string]string{integration-test: private-image-eu,io.kubernetes.container.name: POD,io.kubernetes.pod.name: private-image-eu-5956d58f9f-qhzdr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: abed0c04-d2a3-4c50-899b-16a1db443
2a3,pod-template-hash: 5956d58f9f,},Annotations:map[string]string{kubernetes.io/config.seen: 2021-08-12T23:54:28.743650626Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:846a34bd0f2a21c414651bc1fb00563e9eb01b1e8b0f7b9c3ef0c5c23cc13204,Metadata:&PodSandboxMetadata{Name:nginx,Uid:7419f352-61f4-4cd1-b1a1-5f2622b3f293,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1628812446450183518,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7419f352-61f4-4cd1-b1a1-5f2622b3f293,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2021-08-12T23:54:06.027576199Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c3feaa6d0f2117a4a7528982f8956a5ab57388301f49a42ccdba7c9aa99636b0,Metadata:&PodSandboxMetadata{Name:private-image-7ff9c8c74f-vrcqj,Uid:e4c8cd4c-3a3d-4c12-8863-a7209f060be2,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1628812442805700048,
Labels:map[string]string{integration-test: private-image,io.kubernetes.container.name: POD,io.kubernetes.pod.name: private-image-7ff9c8c74f-vrcqj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4c8cd4c-3a3d-4c12-8863-a7209f060be2,pod-template-hash: 7ff9c8c74f,},Annotations:map[string]string{kubernetes.io/config.seen: 2021-08-12T23:54:02.402843321Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4a53b00e3fd62a312a1ece65ba5f7032a766e056449b25d85439b13677c59f00,Metadata:&PodSandboxMetadata{Name:busybox,Uid:32b4ebc6-1950-4af1-9d15-bd7d8ddbe52c,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1628812431564764948,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 32b4ebc6-1950-4af1-9d15-bd7d8ddbe52c,},Annotations:map[string]string{kubernetes.io/config.seen: 2021-08-12T23:53:51.208920573Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSand
box{Id:13b1d5e59e433a1225a5f85815fe59a76e6ff63588e9f368688dc2351f0631b5,Metadata:&PodSandboxMetadata{Name:packageserver-6488c6c757-lmf89,Uid:3f0a6c54-7925-430d-ab5f-a01d61a5edcb,Namespace:olm,Attempt:0,},State:SANDBOX_READY,CreatedAt:1628812329691200226,Labels:map[string]string{app: packageserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: packageserver-6488c6c757-lmf89,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 3f0a6c54-7925-430d-ab5f-a01d61a5edcb,pod-template-hash: 6488c6c757,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"operators.coreos.com/v1alpha1\",\"kind\":\"ClusterServiceVersion\",\"metadata\":{\"annotations\":{},\"labels\":{\"olm.version\":\"0.17.0\"},\"name\":\"packageserver\",\"namespace\":\"olm\"},\"spec\":{\"apiservicedefinitions\":{\"owned\":[{\"containerPort\":5443,\"deploymentName\":\"packageserver\",\"description\":\"A PackageManifest is a resource generated from existing CatalogSources and their ConfigMaps\",\"d
isplayName\":\"PackageManifest\",\"group\":\"packages.operators.coreos.com\",\"kind\":\"PackageManifest\",\"name\":\"packagemanifests\",\"version\":\"v1\"}]},\"description\":\"Represents an Operator package that is available from a given CatalogSource which will resolve to a ClusterServiceVersion.\",\"displayName\":\"Package Server\",\"install\":{\"spec\":{\"clusterPermissions\":[{\"rules\":[{\"apiGroups\":[\"authorization.k8s.io\"],\"resources\":[\"subjectaccessreviews\"],\"verbs\":[\"create\",\"get\"]},{\"apiGroups\":[\"\"],\"resources\":[\"configmaps\"],\"verbs\":[\"get\",\"list\",\"watch\"]},{\"apiGroups\":[\"operators.coreos.com\"],\"resources\":[\"catalogsources\"],\"verbs\":[\"get\",\"list\",\"watch\"]},{\"apiGroups\":[\"packages.operators.coreos.com\"],\"resources\":[\"packagemanifests\"],\"verbs\":[\"get\",\"list\"]}],\"serviceAccountName\":\"olm-operator-serviceaccount\"}],\"deployments\":[{\"name\":\"packageserver\",\"spec\":{\"replicas\":2,\"selector\":{\"matchLabels\":{\"app\":\"packageserver\"}}
,\"strategy\":{\"type\":\"RollingUpdate\"},\"template\":{\"metadata\":{\"labels\":{\"app\":\"packageserver\"}},\"spec\":{\"containers\":[{\"command\":[\"/bin/package-server\",\"-v=4\",\"--secure-port\",\"5443\",\"--global-namespace\",\"olm\"],\"image\":\"quay.io/operator-framework/olm:v0.17.0@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607\",\"imagePullPolicy\":\"Always\",\"livenessProbe\":{\"httpGet\":{\"path\":\"/healthz\",\"port\":5443,\"scheme\":\"HTTPS\"}},\"name\":\"packageserver\",\"ports\":[{\"containerPort\":5443}],\"readinessProbe\":{\"httpGet\":{\"path\":\"/healthz\",\"port\":5443,\"scheme\":\"HTTPS\"}},\"resources\":{\"requests\":{\"cpu\":\"10m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"runAsUser\":1000},\"terminationMessagePolicy\":\"FallbackToLogsOnError\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmpfs\"}]}],\"nodeSelector\":{\"kubernetes.io/os\":\"linux\"},\"serviceAccountName\":\"olm-operator-serviceaccount\",\"volumes\":[{\"emptyDir\":{},\"name\":\"tmpf
s\"}]}}}}]},\"strategy\":\"deployment\"},\"installModes\":[{\"supported\":true,\"type\":\"OwnNamespace\"},{\"supported\":true,\"type\":\"SingleNamespace\"},{\"supported\":true,\"type\":\"MultiNamespace\"},{\"supported\":true,\"type\":\"AllNamespaces\"}],\"keywords\":[\"packagemanifests\",\"olm\",\"packages\"],\"links\":[{\"name\":\"Package Server\",\"url\":\"https://github.com/operator-framework/operator-lifecycle-manager/tree/master/pkg/package-server\"}],\"maintainers\":[{\"email\":\"openshift-operators@redhat.com\",\"name\":\"Red Hat\"}],\"maturity\":\"alpha\",\"minKubeVersion\":\"1.11.0\",\"provider\":{\"name\":\"Red Hat\"},\"version\":\"0.17.0\"}}\n,kubernetes.io/config.seen: 2021-08-12T23:52:06.909670209Z,kubernetes.io/config.source: api,olm.operatorGroup: olm-operators,olm.operatorNamespace: olm,olm.targetNamespaces: olm,olmcahash: ece0e17be0b824c91ee261db367fb508a52cca22f505daa9546481410d543938,},RuntimeHandler:,},&PodSandbox{Id:162cbd30544c84d70345af95f654c7514300ee44f63755af89d98438e101376c,Metadata
:&PodSandboxMetadata{Name:packageserver-6488c6c757-ncn95,Uid:d1fe5f4e-801b-43bb-8d91-f98278b9b7e0,Namespace:olm,Attempt:0,},State:SANDBOX_READY,CreatedAt:1628812329671917371,Labels:map[string]string{app: packageserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: packageserver-6488c6c757-ncn95,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: d1fe5f4e-801b-43bb-8d91-f98278b9b7e0,pod-template-hash: 6488c6c757,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"operators.coreos.com/v1alpha1\",\"kind\":\"ClusterServiceVersion\",\"metadata\":{\"annotations\":{},\"labels\":{\"olm.version\":\"0.17.0\"},\"name\":\"packageserver\",\"namespace\":\"olm\"},\"spec\":{\"apiservicedefinitions\":{\"owned\":[{\"containerPort\":5443,\"deploymentName\":\"packageserver\",\"description\":\"A PackageManifest is a resource generated from existing CatalogSources and their ConfigMaps\",\"displayName\":\"PackageManifest\",\"group\":\"packages.operators.coreos.com\",\"k
ind\":\"PackageManifest\",\"name\":\"packagemanifests\",\"version\":\"v1\"}]},\"description\":\"Represents an Operator package that is available from a given CatalogSource which will resolve to a ClusterServiceVersion.\",\"displayName\":\"Package Server\",\"install\":{\"spec\":{\"clusterPermissions\":[{\"rules\":[{\"apiGroups\":[\"authorization.k8s.io\"],\"resources\":[\"subjectaccessreviews\"],\"verbs\":[\"create\",\"get\"]},{\"apiGroups\":[\"\"],\"resources\":[\"configmaps\"],\"verbs\":[\"get\",\"list\",\"watch\"]},{\"apiGroups\":[\"operators.coreos.com\"],\"resources\":[\"catalogsources\"],\"verbs\":[\"get\",\"list\",\"watch\"]},{\"apiGroups\":[\"packages.operators.coreos.com\"],\"resources\":[\"packagemanifests\"],\"verbs\":[\"get\",\"list\"]}],\"serviceAccountName\":\"olm-operator-serviceaccount\"}],\"deployments\":[{\"name\":\"packageserver\",\"spec\":{\"replicas\":2,\"selector\":{\"matchLabels\":{\"app\":\"packageserver\"}},\"strategy\":{\"type\":\"RollingUpdate\"},\"template\":{\"metadata\":{\"labels\
":{\"app\":\"packageserver\"}},\"spec\":{\"containers\":[{\"command\":[\"/bin/package-server\",\"-v=4\",\"--secure-port\",\"5443\",\"--global-namespace\",\"olm\"],\"image\":\"quay.io/operator-framework/olm:v0.17.0@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607\",\"imagePullPolicy\":\"Always\",\"livenessProbe\":{\"httpGet\":{\"path\":\"/healthz\",\"port\":5443,\"scheme\":\"HTTPS\"}},\"name\":\"packageserver\",\"ports\":[{\"containerPort\":5443}],\"readinessProbe\":{\"httpGet\":{\"path\":\"/healthz\",\"port\":5443,\"scheme\":\"HTTPS\"}},\"resources\":{\"requests\":{\"cpu\":\"10m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"runAsUser\":1000},\"terminationMessagePolicy\":\"FallbackToLogsOnError\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmpfs\"}]}],\"nodeSelector\":{\"kubernetes.io/os\":\"linux\"},\"serviceAccountName\":\"olm-operator-serviceaccount\",\"volumes\":[{\"emptyDir\":{},\"name\":\"tmpfs\"}]}}}}]},\"strategy\":\"deployment\"},\"installModes\":[{\"supported\":true,\
"type\":\"OwnNamespace\"},{\"supported\":true,\"type\":\"SingleNamespace\"},{\"supported\":true,\"type\":\"MultiNamespace\"},{\"supported\":true,\"type\":\"AllNamespaces\"}],\"keywords\":[\"packagemanifests\",\"olm\",\"packages\"],\"links\":[{\"name\":\"Package Server\",\"url\":\"https://github.com/operator-framework/operator-lifecycle-manager/tree/master/pkg/package-server\"}],\"maintainers\":[{\"email\":\"openshift-operators@redhat.com\",\"name\":\"Red Hat\"}],\"maturity\":\"alpha\",\"minKubeVersion\":\"1.11.0\",\"provider\":{\"name\":\"Red Hat\"},\"version\":\"0.17.0\"}}\n,kubernetes.io/config.seen: 2021-08-12T23:52:06.912106820Z,kubernetes.io/config.source: api,olm.operatorGroup: olm-operators,olm.operatorNamespace: olm,olm.targetNamespaces: olm,olmcahash: ece0e17be0b824c91ee261db367fb508a52cca22f505daa9546481410d543938,},RuntimeHandler:,},&PodSandbox{Id:6a3b5b8a7e1c5939406c4754ff6db153de573b3ca6860cdad0a07cb06c3ff361,Metadata:&PodSandboxMetadata{Name:operatorhubio-catalog-lsjpz,Uid:2088d6f1-a488-4bbd-be2
e-5a3f6482ffd3,Namespace:olm,Attempt:0,},State:SANDBOX_READY,CreatedAt:1628812324915607817,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: operatorhubio-catalog-lsjpz,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 2088d6f1-a488-4bbd-be2e-5a3f6482ffd3,olm.catalogSource: operatorhubio-catalog,},Annotations:map[string]string{kubernetes.io/config.seen: 2021-08-12T23:52:04.538465585Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c3dbb81d724b8247891b3fd6a1b98092c6ddc807e400c3e1e72fde8a5842ecea,Metadata:&PodSandboxMetadata{Name:catalog-operator-75d496484d-x22rr,Uid:9ef3c8dc-b0a8-41e1-a9be-ad8ff5a7cd49,Namespace:olm,Attempt:0,},State:SANDBOX_READY,CreatedAt:1628812306963338555,Labels:map[string]string{app: catalog-operator,io.kubernetes.container.name: POD,io.kubernetes.pod.name: catalog-operator-75d496484d-x22rr,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 9ef3c8dc-b0a8-41e1-a9be-ad8ff5a7cd49,pod-template-hash: 75d496484d,},Annotations:map[
string]string{kubernetes.io/config.seen: 2021-08-12T23:51:45.772188998Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:72c70654465a7b64a9a596f260c12513b66f36d39aaf6c5338f070a68af7ee27,Metadata:&PodSandboxMetadata{Name:olm-operator-859c88c96-vsdlp,Uid:70ea73ed-8703-41ef-8b44-4383788352e1,Namespace:olm,Attempt:0,},State:SANDBOX_READY,CreatedAt:1628812306459109005,Labels:map[string]string{app: olm-operator,io.kubernetes.container.name: POD,io.kubernetes.pod.name: olm-operator-859c88c96-vsdlp,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 70ea73ed-8703-41ef-8b44-4383788352e1,pod-template-hash: 859c88c96,},Annotations:map[string]string{kubernetes.io/config.seen: 2021-08-12T23:51:45.590322459Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3523d49ad00e35dbcb993902231434cb2254feae748306a826d2e9ecf43d6807,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:8c90fe00-a4ae-4fe6-bf9e-0ec3765e7f67,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:16288
12299191332795,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c90fe00-a4ae-4fe6-bf9e-0ec3765e7f67,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"n
ame\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2021-08-12T23:51:38.528439992Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4bcf0afaab52b40716bd095cd300edd339f0fe0eca9524deb4f9189e4f3719e8,Metadata:&PodSandboxMetadata{Name:kube-proxy-d2mcw,Uid:042d754e-0e03-4d55-b93b-ef111d733617,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1628812296829097630,Labels:map[string]string{controller-revision-hash: 7cdcb64568,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-d2mcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 042d754e-0e03-4d55-b93b-ef111d733617,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2021-08-12T23:51:34.980855080Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9a14ebd4bb39be8b1a2c719112f54e151d667bc4600adcc52bdb2e171863be40,Metadata:&PodSandboxMetadata{Name:coredns-558bd4d5db-nrmhx,Uid:d29d822d-395c-41b5-b0e2-da279389bbec,Namespace:kube-system,Attempt
:0,},State:SANDBOX_READY,CreatedAt:1628812295419809923,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-558bd4d5db-nrmhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d29d822d-395c-41b5-b0e2-da279389bbec,k8s-app: kube-dns,pod-template-hash: 558bd4d5db,},Annotations:map[string]string{kubernetes.io/config.seen: 2021-08-12T23:51:35.073546321Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8cfd5098d1c279eb09d72d1589c1c302e7a8df133b2650335202de95d57f8730,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-20210812235029-820289,Uid:914178d72dc0b528da5ffaf8d8c376af,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1628812272351247261,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-20210812235029-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914178d72dc0b528da5ffaf8d8c376af,tier: control-plane,},Annotations:map[st
ring]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.112:8443,kubernetes.io/config.hash: 914178d72dc0b528da5ffaf8d8c376af,kubernetes.io/config.seen: 2021-08-12T23:51:10.445330102Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:53e33974fb6735e31179ccc48bba01c3602647c854a1ba52ad1486f2dfc26a5e,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-20210812235029-820289,Uid:143e03afe6ebd93d6ba969540d8c9889,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1628812272340069436,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-20210812235029-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 143e03afe6ebd93d6ba969540d8c9889,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 143e03afe6ebd93d6ba969540d8c9889,kubernetes.io/config.seen: 2021-08-12T23:51:10.445332658Z,kubernetes.io/config.source:
file,},RuntimeHandler:,},&PodSandbox{Id:c8eddf71fb45c8b895a0205e186713a3cbfd052751758142deae5200187b79c5,Metadata:&PodSandboxMetadata{Name:etcd-addons-20210812235029-820289,Uid:1641765115f1c1317b2417660238322a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1628812272329487214,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-20210812235029-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1641765115f1c1317b2417660238322a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.112:2379,kubernetes.io/config.hash: 1641765115f1c1317b2417660238322a,kubernetes.io/config.seen: 2021-08-12T23:51:10.445316203Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d414c06f351ebe53bbd8956c461fdef373a43169401904dfb8707770ac30753a,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-20210812235029-820289,Uid:00218d7ecf1ada3625b3f1636c3a79de,Nam
espace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1628812272274748531,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-20210812235029-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00218d7ecf1ada3625b3f1636c3a79de,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 00218d7ecf1ada3625b3f1636c3a79de,kubernetes.io/config.seen: 2021-08-12T23:51:10.445334819Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=7991e41e-524b-4b8b-93bd-75bd9b4d37aa name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
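	For context, the ListPodSandbox entry above is CRI-O answering a CRI gRPC query (service /runtime.v1alpha2.RuntimeService) over its local socket; each &PodSandbox{...} record carries the sandbox id, pod metadata, state, creation time, labels, and annotations that the kubelet uses to reconcile pods. A minimal sketch of issuing the same query by hand, assuming CRI-O's default socket path /var/run/crio/crio.sock and a k8s.io/cri-api version that still ships the v1alpha2 API (neither is confirmed by this report):

	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        "google.golang.org/grpc"
	        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
	    )

	    func main() {
	        // gRPC resolves the "unix://" target scheme itself; no custom dialer needed.
	        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock", grpc.WithInsecure())
	        if err != nil {
	            panic(err)
	        }
	        defer conn.Close()

	        client := runtimeapi.NewRuntimeServiceClient(conn)
	        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	        defer cancel()

	        // An empty filter mirrors the ListPodSandboxRequest logged above.
	        resp, err := client.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
	        if err != nil {
	            panic(err)
	        }
	        for _, s := range resp.Items {
	            fmt.Printf("%s %s/%s %s\n", s.Id[:13], s.Metadata.Namespace, s.Metadata.Name, s.State)
	        }
	    }

	The same listing is available from the node shell via crictl pods, which wraps this RPC.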
	Aug 12 23:58:19 addons-20210812235029-820289 crio[2077]: time="2021-08-12 23:58:19.824547600Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8420bc39-6d6f-4dbd-8db4-d89c9551716a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 12 23:58:19 addons-20210812235029-820289 crio[2077]: time="2021-08-12 23:58:19.824599210Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8420bc39-6d6f-4dbd-8db4-d89c9551716a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 12 23:58:19 addons-20210812235029-820289 crio[2077]: time="2021-08-12 23:58:19.824954953Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:24ca02020cbbcefbb4301721f21a1ab5f5f0d233591bff87384e21357f681688,PodSandboxId:ff8260447e181fa4ff44a2bd7a9d5f2f0944233b75334498e6a52e3bcb460218,Metadata:&ContainerMetadata{Name:etcd-restore-operator,Attempt:0,},Image:&ImageSpec{Image:9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628812496407685607,Labels:map[string]string{io.kubernetes.container.name: etcd-restore-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-c5jdd,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: a251dd29-51a3-45fd-aa38-d910126913ea,},Annotations:map[string]string{io.kubernetes.container.hash: 6a3106c4,io.kubernetes.container.restartCoun
t: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d04086fbc6f40fc45bf4ec039228b8bf21b6aee8665969cb4eca57451130a7b,PodSandboxId:ff8260447e181fa4ff44a2bd7a9d5f2f0944233b75334498e6a52e3bcb460218,Metadata:&ContainerMetadata{Name:etcd-backup-operator,Attempt:0,},Image:&ImageSpec{Image:9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628812495914374041,Labels:map[string]string{io.kubernetes.container.name: etcd-backup-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-c5jdd,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: a251dd29-51a3-45fd-aa38-d910126913ea,},Annotations:map[string]string{io.kubernetes.container.hash: e9142ec4,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ccfe0fabcfc227a5f5d8af68a480147d136b0325a3171fae86f0500ead9e2e7,PodSandboxId:ff8260447e181fa4ff44a2bd7a9d5f2f0944233b75334498e6a52e3bcb460218,Metadata:&ContainerMetadata{Name:etcd-operator,Attempt:0,},Image:&ImageSpec{Image:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628812495137946973,Labels:map[string]string{io.kubernetes.container.name: etcd-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-c5jdd,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: a251dd29-51a3-45fd-aa38-d910126913ea,},Annotations:map[string]string{io.kubernetes.container.hash: a4475c14,io.kubernetes.conta
iner.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26ebec6a37c253bc83dea77a39b63637ac39751a5a7db0f933e5ff9145716f62,PodSandboxId:6cfdc1bddbde4925f23c205676b88612dcba769396b29a83aaf0c37cc10779d8,Metadata:&ContainerMetadata{Name:private-image-eu,Attempt:0,},Image:&ImageSpec{Image:europe-west1-docker.pkg.dev/k8s-minikube/test-artifacts-eu/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,Annotations:map[string]string{},},ImageRef:europe-west1-docker.pkg.dev/k8s-minikube/test-artifacts-eu/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,State:CONTAINER_RUNNING,CreatedAt:1628812486994333072,Labels:map[string]string{io.kubernetes.container.name: private-image-eu,io.kubernetes.pod.name: private-image-eu-5956d58f9f-qhzdr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: abed0c04-d2a3-4c5
0-899b-16a1db4432a3,},Annotations:map[string]string{io.kubernetes.container.hash: 75354c1d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:589183e860536b37198eebf41ddd7769db7d1722c08aa9cae51ca788603fe806,PodSandboxId:846a34bd0f2a21c414651bc1fb00563e9eb01b1e8b0f7b9c3ef0c5c23cc13204,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce,State:CONTAINER_RUNNING,CreatedAt:1628812467777458064,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7419f352-61f4-4cd1-b1a1-5f2622b3f293,},Annotation
s:map[string]string{io.kubernetes.container.hash: 439f59e2,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:878beb2424135ce004d37826518b3ced2d8a28b834227a0530524c2602b7a393,PodSandboxId:c3feaa6d0f2117a4a7528982f8956a5ab57388301f49a42ccdba7c9aa99636b0,Metadata:&ContainerMetadata{Name:private-image,Attempt:0,},Image:&ImageSpec{Image:us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,Annotations:map[string]string{},},ImageRef:us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,State:CONTAINER_RUNNING,CreatedAt:1628812463068710000,Labels:map[string]string{io.kubernetes.container.name: private-image,io.kubernete
s.pod.name: private-image-7ff9c8c74f-vrcqj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4c8cd4c-3a3d-4c12-8863-a7209f060be2,},Annotations:map[string]string{io.kubernetes.container.hash: c7e432c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfd76cbf91426603b0d7b370f45186bfc7349cb6acd625e74b50f2c97d6d5462,PodSandboxId:4a53b00e3fd62a312a1ece65ba5f7032a766e056449b25d85439b13677c59f00,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,State:CONTAINER_RUNNING,CreatedAt:1628812434394331053,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernet
es.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 32b4ebc6-1950-4af1-9d15-bd7d8ddbe52c,},Annotations:map[string]string{io.kubernetes.container.hash: 85f81115,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79637d5190536c29c355e710c67cd16200301bac18136d33c0a8a5e5f866b76f,PodSandboxId:13b1d5e59e433a1225a5f85815fe59a76e6ff63588e9f368688dc2351f0631b5,Metadata:&ContainerMetadata{Name:packageserver,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628812375743364158,Labels:map[string]string{io.kubernetes.container.name: packageserver,io.kubernet
es.pod.name: packageserver-6488c6c757-lmf89,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 3f0a6c54-7925-430d-ab5f-a01d61a5edcb,},Annotations:map[string]string{io.kubernetes.container.hash: 42aad7cc,io.kubernetes.container.ports: [{\"containerPort\":5443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5e00a6b90a5271ce54b61377e461e4d88a19fd90dc06863d5a746577334ad1,PodSandboxId:162cbd30544c84d70345af95f654c7514300ee44f63755af89d98438e101376c,Metadata:&ContainerMetadata{Name:packageserver,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUN
NING,CreatedAt:1628812375330470665,Labels:map[string]string{io.kubernetes.container.name: packageserver,io.kubernetes.pod.name: packageserver-6488c6c757-ncn95,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: d1fe5f4e-801b-43bb-8d91-f98278b9b7e0,},Annotations:map[string]string{io.kubernetes.container.hash: 1ed90569,io.kubernetes.container.ports: [{\"containerPort\":5443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83fae35060ce317927df9c26e190d57130d6bf5ca320082e659c93d859772586,PodSandboxId:6a3b5b8a7e1c5939406c4754ff6db153de573b3ca6860cdad0a07cb06c3ff361,Metadata:&ContainerMetadata{Name:registry-server,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0,Annotations:map[string]
string{},},ImageRef:quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0,State:CONTAINER_RUNNING,CreatedAt:1628812374485600354,Labels:map[string]string{io.kubernetes.container.name: registry-server,io.kubernetes.pod.name: operatorhubio-catalog-lsjpz,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 2088d6f1-a488-4bbd-be2e-5a3f6482ffd3,},Annotations:map[string]string{io.kubernetes.container.hash: a6fa37,io.kubernetes.container.ports: [{\"name\":\"grpc\",\"containerPort\":50051,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47de47c596fe3e327aa94468d4ee52f31f4cc9eed74ced1c79468444052c72b3,PodSandboxId:72c70654465a7b64a9a596f260c12513b66f36d39aaf6c5338f070a68af7ee27,Metadata:&ContainerMetadata{Name:olm-operator,Attempt:0,},Image:&Imag
eSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628812324661923427,Labels:map[string]string{io.kubernetes.container.name: olm-operator,io.kubernetes.pod.name: olm-operator-859c88c96-vsdlp,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 70ea73ed-8703-41ef-8b44-4383788352e1,},Annotations:map[string]string{io.kubernetes.container.hash: 456d2455,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":8081,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cc66b29ae716721d7cfc70ab42c7883f2
440ff9da5a55396d21d971ec76e878,PodSandboxId:c3dbb81d724b8247891b3fd6a1b98092c6ddc807e400c3e1e72fde8a5842ecea,Metadata:&ContainerMetadata{Name:catalog-operator,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628812324027304855,Labels:map[string]string{io.kubernetes.container.name: catalog-operator,io.kubernetes.pod.name: catalog-operator-75d496484d-x22rr,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 9ef3c8dc-b0a8-41e1-a9be-ad8ff5a7cd49,},Annotations:map[string]string{io.kubernetes.container.hash: 22ab0afe,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":8081,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b83db69e2b1957cd9c27a10214e1e150f3ebf424abed5d0d8af9e975dd330c37,PodSandboxId:3523d49ad00e35dbcb993902231434cb2254feae748306a826d2e9ecf43d6807,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628812299895268336,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c90fe00-a4ae-4fe6-bf9e-0ec3765e7f67,},Annotations:map[string]string{io.kubernetes.container.hash: 9d6a3819,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:144d0dbab7b4bb72eeb618430d51609f43dcea9e62c5c5c6cfc53589202468dc,PodSandboxId:4bcf0afaab52b40716bd095cd300edd339f0fe0eca9524deb4f9189e4f3719e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628812297745577752,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d2mcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 042d754e-0e03-4d55-b93b-ef111d733617,},Annotations:map[string]string{io.kubernetes.container.hash: 61224e33,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:252ee0cdeea822fe191a791748af57151814b3174fe9d7588e8b35af8180baa5,PodSandboxId:9a14ebd4bb39be8b1a2c719112f54e151d667bc4600adcc52bdb2e171863be40,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628812296301812126,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-nrmhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d29d822d-395c-41b5-b0e2-da279389bbec,},Annotations:map[string]string{io.kubernetes.container.hash: e8530073,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protoc
ol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:832c0a74513bf5368b3dddeaa34591b32693c3d659a90e894db641d30945a94e,PodSandboxId:c8eddf71fb45c8b895a0205e186713a3cbfd052751758142deae5200187b79c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628812273986650559,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-20210812235029-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1641765115f1c1317b2417660238322a,},Annotations:map[string]string{io
.kubernetes.container.hash: 17a239e8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b610af4737f78f6e81098816b4d4bb32edad7e272eff91eedab1e1228915246,PodSandboxId:d414c06f351ebe53bbd8956c461fdef373a43169401904dfb8707770ac30753a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628812273685167504,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-20210812235029-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00218d7ecf1ada3625b3f1636c3a79de,},Annotations:map[string]string{io.kube
rnetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df7df43ae185dc0ace9a7e00d73739e38efa52af36ac1dd82116971a026fb789,PodSandboxId:8cfd5098d1c279eb09d72d1589c1c302e7a8df133b2650335202de95d57f8730,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628812273619122504,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-20210812235029-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914178d72dc0b528da5ffaf8d8c376af,},Annotations:map[string]string{io.kubernetes
.container.hash: 9cfa5d4f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d1f01676b8baf312ddbcf8715f52aa8544b29bb3b38375a6f3fa993969901fb,PodSandboxId:53e33974fb6735e31179ccc48bba01c3602647c854a1ba52ad1486f2dfc26a5e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628812273106325676,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-20210812235029-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 143e03afe6ebd93d6ba969540d8c9889,},Annotations:
map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8420bc39-6d6f-4dbd-8db4-d89c9551716a name=/runtime.v1alpha2.RuntimeService/ListContainers
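	The ListContainers request above (id=8420bc39) carried Filter{State: CONTAINER_RUNNING}, so the response enumerates only running containers. A hedged sketch of the equivalent filtered call, reusing the v1alpha2 client from the previous snippet (client construction omitted):

	    // listRunning mirrors the logged ContainerFilter{State: CONTAINER_RUNNING}
	    // request and prints one line per running container.
	    func listRunning(client runtimeapi.RuntimeServiceClient) error {
	        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	        defer cancel()

	        resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
	            Filter: &runtimeapi.ContainerFilter{
	                State: &runtimeapi.ContainerStateValue{
	                    State: runtimeapi.ContainerState_CONTAINER_RUNNING,
	                },
	            },
	        })
	        if err != nil {
	            return err
	        }
	        for _, c := range resp.Containers {
	            fmt.Printf("%s %s image=%s\n", c.Id[:13], c.Metadata.Name, c.ImageRef)
	        }
	        return nil
	    }

	(crictl ps without -a issues the same running-only query.)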
	Aug 12 23:58:19 addons-20210812235029-820289 crio[2077]: time="2021-08-12 23:58:19.834749141Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1f19a13a-ed4d-486c-97f4-ffc97437a113 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 12 23:58:19 addons-20210812235029-820289 crio[2077]: time="2021-08-12 23:58:19.834896339Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1f19a13a-ed4d-486c-97f4-ffc97437a113 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 12 23:58:19 addons-20210812235029-820289 crio[2077]: time="2021-08-12 23:58:19.838576055Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:24ca02020cbbcefbb4301721f21a1ab5f5f0d233591bff87384e21357f681688,PodSandboxId:ff8260447e181fa4ff44a2bd7a9d5f2f0944233b75334498e6a52e3bcb460218,Metadata:&ContainerMetadata{Name:etcd-restore-operator,Attempt:0,},Image:&ImageSpec{Image:9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628812496407685607,Labels:map[string]string{io.kubernetes.container.name: etcd-restore-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-c5jdd,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: a251dd29-51a3-45fd-aa38-d910126913ea,},Annotations:map[string]string{io.kubernetes.container.hash: 6a3106c4,io.kubernetes.container.restartCoun
t: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d04086fbc6f40fc45bf4ec039228b8bf21b6aee8665969cb4eca57451130a7b,PodSandboxId:ff8260447e181fa4ff44a2bd7a9d5f2f0944233b75334498e6a52e3bcb460218,Metadata:&ContainerMetadata{Name:etcd-backup-operator,Attempt:0,},Image:&ImageSpec{Image:9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628812495914374041,Labels:map[string]string{io.kubernetes.container.name: etcd-backup-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-c5jdd,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: a251dd29-51a3-45fd-aa38-d910126913ea,},Annotations:map[string]string{io.kubernetes.container.hash: e9142ec4,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ccfe0fabcfc227a5f5d8af68a480147d136b0325a3171fae86f0500ead9e2e7,PodSandboxId:ff8260447e181fa4ff44a2bd7a9d5f2f0944233b75334498e6a52e3bcb460218,Metadata:&ContainerMetadata{Name:etcd-operator,Attempt:0,},Image:&ImageSpec{Image:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628812495137946973,Labels:map[string]string{io.kubernetes.container.name: etcd-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-c5jdd,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: a251dd29-51a3-45fd-aa38-d910126913ea,},Annotations:map[string]string{io.kubernetes.container.hash: a4475c14,io.kubernetes.conta
iner.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26ebec6a37c253bc83dea77a39b63637ac39751a5a7db0f933e5ff9145716f62,PodSandboxId:6cfdc1bddbde4925f23c205676b88612dcba769396b29a83aaf0c37cc10779d8,Metadata:&ContainerMetadata{Name:private-image-eu,Attempt:0,},Image:&ImageSpec{Image:europe-west1-docker.pkg.dev/k8s-minikube/test-artifacts-eu/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,Annotations:map[string]string{},},ImageRef:europe-west1-docker.pkg.dev/k8s-minikube/test-artifacts-eu/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,State:CONTAINER_RUNNING,CreatedAt:1628812486994333072,Labels:map[string]string{io.kubernetes.container.name: private-image-eu,io.kubernetes.pod.name: private-image-eu-5956d58f9f-qhzdr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: abed0c04-d2a3-4c5
0-899b-16a1db4432a3,},Annotations:map[string]string{io.kubernetes.container.hash: 75354c1d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:589183e860536b37198eebf41ddd7769db7d1722c08aa9cae51ca788603fe806,PodSandboxId:846a34bd0f2a21c414651bc1fb00563e9eb01b1e8b0f7b9c3ef0c5c23cc13204,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce,State:CONTAINER_RUNNING,CreatedAt:1628812467777458064,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7419f352-61f4-4cd1-b1a1-5f2622b3f293,},Annotation
s:map[string]string{io.kubernetes.container.hash: 439f59e2,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:878beb2424135ce004d37826518b3ced2d8a28b834227a0530524c2602b7a393,PodSandboxId:c3feaa6d0f2117a4a7528982f8956a5ab57388301f49a42ccdba7c9aa99636b0,Metadata:&ContainerMetadata{Name:private-image,Attempt:0,},Image:&ImageSpec{Image:us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,Annotations:map[string]string{},},ImageRef:us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,State:CONTAINER_RUNNING,CreatedAt:1628812463068710000,Labels:map[string]string{io.kubernetes.container.name: private-image,io.kubernete
s.pod.name: private-image-7ff9c8c74f-vrcqj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4c8cd4c-3a3d-4c12-8863-a7209f060be2,},Annotations:map[string]string{io.kubernetes.container.hash: c7e432c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfd76cbf91426603b0d7b370f45186bfc7349cb6acd625e74b50f2c97d6d5462,PodSandboxId:4a53b00e3fd62a312a1ece65ba5f7032a766e056449b25d85439b13677c59f00,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,State:CONTAINER_RUNNING,CreatedAt:1628812434394331053,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernet
es.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 32b4ebc6-1950-4af1-9d15-bd7d8ddbe52c,},Annotations:map[string]string{io.kubernetes.container.hash: 85f81115,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79637d5190536c29c355e710c67cd16200301bac18136d33c0a8a5e5f866b76f,PodSandboxId:13b1d5e59e433a1225a5f85815fe59a76e6ff63588e9f368688dc2351f0631b5,Metadata:&ContainerMetadata{Name:packageserver,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628812375743364158,Labels:map[string]string{io.kubernetes.container.name: packageserver,io.kubernet
es.pod.name: packageserver-6488c6c757-lmf89,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 3f0a6c54-7925-430d-ab5f-a01d61a5edcb,},Annotations:map[string]string{io.kubernetes.container.hash: 42aad7cc,io.kubernetes.container.ports: [{\"containerPort\":5443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5e00a6b90a5271ce54b61377e461e4d88a19fd90dc06863d5a746577334ad1,PodSandboxId:162cbd30544c84d70345af95f654c7514300ee44f63755af89d98438e101376c,Metadata:&ContainerMetadata{Name:packageserver,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUN
NING,CreatedAt:1628812375330470665,Labels:map[string]string{io.kubernetes.container.name: packageserver,io.kubernetes.pod.name: packageserver-6488c6c757-ncn95,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: d1fe5f4e-801b-43bb-8d91-f98278b9b7e0,},Annotations:map[string]string{io.kubernetes.container.hash: 1ed90569,io.kubernetes.container.ports: [{\"containerPort\":5443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83fae35060ce317927df9c26e190d57130d6bf5ca320082e659c93d859772586,PodSandboxId:6a3b5b8a7e1c5939406c4754ff6db153de573b3ca6860cdad0a07cb06c3ff361,Metadata:&ContainerMetadata{Name:registry-server,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0,Annotations:map[string]
string{},},ImageRef:quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0,State:CONTAINER_RUNNING,CreatedAt:1628812374485600354,Labels:map[string]string{io.kubernetes.container.name: registry-server,io.kubernetes.pod.name: operatorhubio-catalog-lsjpz,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 2088d6f1-a488-4bbd-be2e-5a3f6482ffd3,},Annotations:map[string]string{io.kubernetes.container.hash: a6fa37,io.kubernetes.container.ports: [{\"name\":\"grpc\",\"containerPort\":50051,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d73d646852fe4aa976ce0bd39287080be283652ecd5611a35d72cd03dfa3b2,PodSandboxId:fba557957f59a84847ff0a75ecba6ae1dad057e8e2caabae2f5f455c1ff052de,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{I
mage:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1628812331035309077,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-6dngl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4ba67514-8f40-41b0-ad6f-3db817ed7f3e,},Annotations:map[string]string{io.kubernetes.container.hash: 424b9e3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c992643e54f96cd3ce9001ef74b8bbf6dda8febaaebe4496e8614586458fb9f8,PodSandboxId:0bba02f5b4bb2245a2fdd76ea6d06a9740e7c8570d360ea5636fa4644152e896,Metadata:&ContainerMetadata{Name:cre
ate,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1628812330084737828,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jswnd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 35929738-f6a6-4263-91dd-5e83b165fc66,},Annotations:map[string]string{io.kubernetes.container.hash: 784a9392,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47de47c596fe3e327aa94468d4ee52f31f4cc9eed74ced1c79468444052c72b3,PodSandboxId:72c70654465a7b64a9a596f260c12513b66f36d39aaf6c5338f070a68af7ee27,
Metadata:&ContainerMetadata{Name:olm-operator,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628812324661923427,Labels:map[string]string{io.kubernetes.container.name: olm-operator,io.kubernetes.pod.name: olm-operator-859c88c96-vsdlp,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 70ea73ed-8703-41ef-8b44-4383788352e1,},Annotations:map[string]string{io.kubernetes.container.hash: 456d2455,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":8081,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminatio
nGracePeriod: 30,},},&Container{Id:3cc66b29ae716721d7cfc70ab42c7883f2440ff9da5a55396d21d971ec76e878,PodSandboxId:c3dbb81d724b8247891b3fd6a1b98092c6ddc807e400c3e1e72fde8a5842ecea,Metadata:&ContainerMetadata{Name:catalog-operator,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628812324027304855,Labels:map[string]string{io.kubernetes.container.name: catalog-operator,io.kubernetes.pod.name: catalog-operator-75d496484d-x22rr,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 9ef3c8dc-b0a8-41e1-a9be-ad8ff5a7cd49,},Annotations:map[string]string{io.kubernetes.container.hash: 22ab0afe,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":8081,\"protocol\":\"TCP\"}],io.kubernetes.cont
ainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b83db69e2b1957cd9c27a10214e1e150f3ebf424abed5d0d8af9e975dd330c37,PodSandboxId:3523d49ad00e35dbcb993902231434cb2254feae748306a826d2e9ecf43d6807,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628812299895268336,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c90fe00-a4ae-4fe6-bf9e-0ec3765e7f67,},Annotations:map[string]string{io.kubernetes.container.hash: 9d6a3819,io
.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:144d0dbab7b4bb72eeb618430d51609f43dcea9e62c5c5c6cfc53589202468dc,PodSandboxId:4bcf0afaab52b40716bd095cd300edd339f0fe0eca9524deb4f9189e4f3719e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628812297745577752,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d2mcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 042d754e-0e03-4d55-b93b-ef111d733617,},Annotations:map[string]string{io.kubernetes.container.hash: 61224e33,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:252ee0cdeea822fe191a791748af57151814b3174fe9d7588e8b35af8180baa5,PodSandboxId:9a14ebd4bb39be8b1a2c719112f54e151d667bc4600adcc52bdb2e171863be40,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628812296301812126,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-nrmhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d29d822d-395c-41b5-b0e2-da279389bbec,},Annotations:map[string]string{io.kubernetes.container.hash: e8530073,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:832c0a74513bf5368b3dddeaa34591b32693c3d659a90e894db641d30945a94e,PodSandboxId:c8eddf71fb45c8b895a0205e186713a3cbfd052751758142deae5200187b79c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628812273986650559,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-20210812235029-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 1641765115f1c1317b2417660238322a,},Annotations:map[string]string{io.kubernetes.container.hash: 17a239e8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b610af4737f78f6e81098816b4d4bb32edad7e272eff91eedab1e1228915246,PodSandboxId:d414c06f351ebe53bbd8956c461fdef373a43169401904dfb8707770ac30753a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628812273685167504,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-20210812235029-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 002
18d7ecf1ada3625b3f1636c3a79de,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df7df43ae185dc0ace9a7e00d73739e38efa52af36ac1dd82116971a026fb789,PodSandboxId:8cfd5098d1c279eb09d72d1589c1c302e7a8df133b2650335202de95d57f8730,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628812273619122504,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-20210812235029-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914178d72
dc0b528da5ffaf8d8c376af,},Annotations:map[string]string{io.kubernetes.container.hash: 9cfa5d4f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d1f01676b8baf312ddbcf8715f52aa8544b29bb3b38375a6f3fa993969901fb,PodSandboxId:53e33974fb6735e31179ccc48bba01c3602647c854a1ba52ad1486f2dfc26a5e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628812273106325676,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-20210812235029-820289,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 143e03afe6ebd93d6ba969540d8c9889,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1f19a13a-ed4d-486c-97f4-ffc97437a113 name=/runtime.v1alpha2.RuntimeService/ListContainers
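
These entries are CRI-O's gRPC debug logging of the kubelet's periodic /runtime.v1alpha2.RuntimeService/ListContainers calls. For reference, a minimal Go sketch of issuing the same call by hand; the socket path (/var/run/crio/crio.sock) is an assumption based on CRI-O's default and does not appear in the log:

    // Sketch: call /runtime.v1alpha2.RuntimeService/ListContainers against a
    // local CRI-O socket, as the kubelet does in the log entries above.
    // Assumes the default CRI-O socket path; adjust for your host.
    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "google.golang.org/grpc"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
    )

    func main() {
        // Dial the local unix socket; transport security is unnecessary here.
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock", grpc.WithInsecure())
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        client := runtimeapi.NewRuntimeServiceClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // An empty request (nil filter) reproduces the "No filters were
        // applied, returning full container list" path seen in the log.
        resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
        if err != nil {
            log.Fatal(err)
        }
        for _, c := range resp.Containers {
            fmt.Printf("%s  %s  %s\n", c.Id[:13], c.State, c.Metadata.Name)
        }
    }
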
	Aug 12 23:58:19 addons-20210812235029-820289 crio[2077]: time="2021-08-12 23:58:19.881389190Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=74593da7-c358-42d6-b6d5-051dc408168a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 12 23:58:19 addons-20210812235029-820289 crio[2077]: time="2021-08-12 23:58:19.881523329Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=74593da7-c358-42d6-b6d5-051dc408168a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 12 23:58:19 addons-20210812235029-820289 crio[2077]: time="2021-08-12 23:58:19.881923305Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:24ca02020cbbcefbb4301721f21a1ab5f5f0d233591bff87384e21357f681688,PodSandboxId:ff8260447e181fa4ff44a2bd7a9d5f2f0944233b75334498e6a52e3bcb460218,Metadata:&ContainerMetadata{Name:etcd-restore-operator,Attempt:0,},Image:&ImageSpec{Image:9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628812496407685607,Labels:map[string]string{io.kubernetes.container.name: etcd-restore-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-c5jdd,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: a251dd29-51a3-45fd-aa38-d910126913ea,},Annotations:map[string]string{io.kubernetes.container.hash: 6a3106c4,io.kubernetes.container.restartCoun
t: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d04086fbc6f40fc45bf4ec039228b8bf21b6aee8665969cb4eca57451130a7b,PodSandboxId:ff8260447e181fa4ff44a2bd7a9d5f2f0944233b75334498e6a52e3bcb460218,Metadata:&ContainerMetadata{Name:etcd-backup-operator,Attempt:0,},Image:&ImageSpec{Image:9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628812495914374041,Labels:map[string]string{io.kubernetes.container.name: etcd-backup-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-c5jdd,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: a251dd29-51a3-45fd-aa38-d910126913ea,},Annotations:map[string]string{io.kubernetes.container.hash: e9142ec4,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ccfe0fabcfc227a5f5d8af68a480147d136b0325a3171fae86f0500ead9e2e7,PodSandboxId:ff8260447e181fa4ff44a2bd7a9d5f2f0944233b75334498e6a52e3bcb460218,Metadata:&ContainerMetadata{Name:etcd-operator,Attempt:0,},Image:&ImageSpec{Image:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628812495137946973,Labels:map[string]string{io.kubernetes.container.name: etcd-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-c5jdd,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: a251dd29-51a3-45fd-aa38-d910126913ea,},Annotations:map[string]string{io.kubernetes.container.hash: a4475c14,io.kubernetes.conta
iner.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26ebec6a37c253bc83dea77a39b63637ac39751a5a7db0f933e5ff9145716f62,PodSandboxId:6cfdc1bddbde4925f23c205676b88612dcba769396b29a83aaf0c37cc10779d8,Metadata:&ContainerMetadata{Name:private-image-eu,Attempt:0,},Image:&ImageSpec{Image:europe-west1-docker.pkg.dev/k8s-minikube/test-artifacts-eu/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,Annotations:map[string]string{},},ImageRef:europe-west1-docker.pkg.dev/k8s-minikube/test-artifacts-eu/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,State:CONTAINER_RUNNING,CreatedAt:1628812486994333072,Labels:map[string]string{io.kubernetes.container.name: private-image-eu,io.kubernetes.pod.name: private-image-eu-5956d58f9f-qhzdr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: abed0c04-d2a3-4c5
0-899b-16a1db4432a3,},Annotations:map[string]string{io.kubernetes.container.hash: 75354c1d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:589183e860536b37198eebf41ddd7769db7d1722c08aa9cae51ca788603fe806,PodSandboxId:846a34bd0f2a21c414651bc1fb00563e9eb01b1e8b0f7b9c3ef0c5c23cc13204,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce,State:CONTAINER_RUNNING,CreatedAt:1628812467777458064,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7419f352-61f4-4cd1-b1a1-5f2622b3f293,},Annotation
s:map[string]string{io.kubernetes.container.hash: 439f59e2,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:878beb2424135ce004d37826518b3ced2d8a28b834227a0530524c2602b7a393,PodSandboxId:c3feaa6d0f2117a4a7528982f8956a5ab57388301f49a42ccdba7c9aa99636b0,Metadata:&ContainerMetadata{Name:private-image,Attempt:0,},Image:&ImageSpec{Image:us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,Annotations:map[string]string{},},ImageRef:us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,State:CONTAINER_RUNNING,CreatedAt:1628812463068710000,Labels:map[string]string{io.kubernetes.container.name: private-image,io.kubernete
s.pod.name: private-image-7ff9c8c74f-vrcqj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4c8cd4c-3a3d-4c12-8863-a7209f060be2,},Annotations:map[string]string{io.kubernetes.container.hash: c7e432c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfd76cbf91426603b0d7b370f45186bfc7349cb6acd625e74b50f2c97d6d5462,PodSandboxId:4a53b00e3fd62a312a1ece65ba5f7032a766e056449b25d85439b13677c59f00,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,State:CONTAINER_RUNNING,CreatedAt:1628812434394331053,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernet
es.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 32b4ebc6-1950-4af1-9d15-bd7d8ddbe52c,},Annotations:map[string]string{io.kubernetes.container.hash: 85f81115,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79637d5190536c29c355e710c67cd16200301bac18136d33c0a8a5e5f866b76f,PodSandboxId:13b1d5e59e433a1225a5f85815fe59a76e6ff63588e9f368688dc2351f0631b5,Metadata:&ContainerMetadata{Name:packageserver,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628812375743364158,Labels:map[string]string{io.kubernetes.container.name: packageserver,io.kubernet
es.pod.name: packageserver-6488c6c757-lmf89,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 3f0a6c54-7925-430d-ab5f-a01d61a5edcb,},Annotations:map[string]string{io.kubernetes.container.hash: 42aad7cc,io.kubernetes.container.ports: [{\"containerPort\":5443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5e00a6b90a5271ce54b61377e461e4d88a19fd90dc06863d5a746577334ad1,PodSandboxId:162cbd30544c84d70345af95f654c7514300ee44f63755af89d98438e101376c,Metadata:&ContainerMetadata{Name:packageserver,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUN
NING,CreatedAt:1628812375330470665,Labels:map[string]string{io.kubernetes.container.name: packageserver,io.kubernetes.pod.name: packageserver-6488c6c757-ncn95,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: d1fe5f4e-801b-43bb-8d91-f98278b9b7e0,},Annotations:map[string]string{io.kubernetes.container.hash: 1ed90569,io.kubernetes.container.ports: [{\"containerPort\":5443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83fae35060ce317927df9c26e190d57130d6bf5ca320082e659c93d859772586,PodSandboxId:6a3b5b8a7e1c5939406c4754ff6db153de573b3ca6860cdad0a07cb06c3ff361,Metadata:&ContainerMetadata{Name:registry-server,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0,Annotations:map[string]
string{},},ImageRef:quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0,State:CONTAINER_RUNNING,CreatedAt:1628812374485600354,Labels:map[string]string{io.kubernetes.container.name: registry-server,io.kubernetes.pod.name: operatorhubio-catalog-lsjpz,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 2088d6f1-a488-4bbd-be2e-5a3f6482ffd3,},Annotations:map[string]string{io.kubernetes.container.hash: a6fa37,io.kubernetes.container.ports: [{\"name\":\"grpc\",\"containerPort\":50051,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d73d646852fe4aa976ce0bd39287080be283652ecd5611a35d72cd03dfa3b2,PodSandboxId:fba557957f59a84847ff0a75ecba6ae1dad057e8e2caabae2f5f455c1ff052de,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{I
mage:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1628812331035309077,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-6dngl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4ba67514-8f40-41b0-ad6f-3db817ed7f3e,},Annotations:map[string]string{io.kubernetes.container.hash: 424b9e3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c992643e54f96cd3ce9001ef74b8bbf6dda8febaaebe4496e8614586458fb9f8,PodSandboxId:0bba02f5b4bb2245a2fdd76ea6d06a9740e7c8570d360ea5636fa4644152e896,Metadata:&ContainerMetadata{Name:cre
ate,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1628812330084737828,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jswnd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 35929738-f6a6-4263-91dd-5e83b165fc66,},Annotations:map[string]string{io.kubernetes.container.hash: 784a9392,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47de47c596fe3e327aa94468d4ee52f31f4cc9eed74ced1c79468444052c72b3,PodSandboxId:72c70654465a7b64a9a596f260c12513b66f36d39aaf6c5338f070a68af7ee27,
Metadata:&ContainerMetadata{Name:olm-operator,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628812324661923427,Labels:map[string]string{io.kubernetes.container.name: olm-operator,io.kubernetes.pod.name: olm-operator-859c88c96-vsdlp,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 70ea73ed-8703-41ef-8b44-4383788352e1,},Annotations:map[string]string{io.kubernetes.container.hash: 456d2455,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":8081,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminatio
nGracePeriod: 30,},},&Container{Id:3cc66b29ae716721d7cfc70ab42c7883f2440ff9da5a55396d21d971ec76e878,PodSandboxId:c3dbb81d724b8247891b3fd6a1b98092c6ddc807e400c3e1e72fde8a5842ecea,Metadata:&ContainerMetadata{Name:catalog-operator,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628812324027304855,Labels:map[string]string{io.kubernetes.container.name: catalog-operator,io.kubernetes.pod.name: catalog-operator-75d496484d-x22rr,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 9ef3c8dc-b0a8-41e1-a9be-ad8ff5a7cd49,},Annotations:map[string]string{io.kubernetes.container.hash: 22ab0afe,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":8081,\"protocol\":\"TCP\"}],io.kubernetes.cont
ainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b83db69e2b1957cd9c27a10214e1e150f3ebf424abed5d0d8af9e975dd330c37,PodSandboxId:3523d49ad00e35dbcb993902231434cb2254feae748306a826d2e9ecf43d6807,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628812299895268336,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c90fe00-a4ae-4fe6-bf9e-0ec3765e7f67,},Annotations:map[string]string{io.kubernetes.container.hash: 9d6a3819,io
.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:144d0dbab7b4bb72eeb618430d51609f43dcea9e62c5c5c6cfc53589202468dc,PodSandboxId:4bcf0afaab52b40716bd095cd300edd339f0fe0eca9524deb4f9189e4f3719e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628812297745577752,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d2mcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 042d754e-0e03-4d55-b93b-ef111d733617,},Annotations:map[string]string{io.kubernetes.container.hash: 61224e33,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:252ee0cdeea822fe191a791748af57151814b3174fe9d7588e8b35af8180baa5,PodSandboxId:9a14ebd4bb39be8b1a2c719112f54e151d667bc4600adcc52bdb2e171863be40,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628812296301812126,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-nrmhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d29d822d-395c-41b5-b0e2-da279389bbec,},Annotations:map[string]string{io.kubernetes.container.hash: e8530073,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:832c0a74513bf5368b3dddeaa34591b32693c3d659a90e894db641d30945a94e,PodSandboxId:c8eddf71fb45c8b895a0205e186713a3cbfd052751758142deae5200187b79c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628812273986650559,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-20210812235029-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 1641765115f1c1317b2417660238322a,},Annotations:map[string]string{io.kubernetes.container.hash: 17a239e8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b610af4737f78f6e81098816b4d4bb32edad7e272eff91eedab1e1228915246,PodSandboxId:d414c06f351ebe53bbd8956c461fdef373a43169401904dfb8707770ac30753a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628812273685167504,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-20210812235029-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 002
18d7ecf1ada3625b3f1636c3a79de,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df7df43ae185dc0ace9a7e00d73739e38efa52af36ac1dd82116971a026fb789,PodSandboxId:8cfd5098d1c279eb09d72d1589c1c302e7a8df133b2650335202de95d57f8730,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628812273619122504,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-20210812235029-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914178d72
dc0b528da5ffaf8d8c376af,},Annotations:map[string]string{io.kubernetes.container.hash: 9cfa5d4f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d1f01676b8baf312ddbcf8715f52aa8544b29bb3b38375a6f3fa993969901fb,PodSandboxId:53e33974fb6735e31179ccc48bba01c3602647c854a1ba52ad1486f2dfc26a5e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628812273106325676,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-20210812235029-820289,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 143e03afe6ebd93d6ba969540d8c9889,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=74593da7-c358-42d6-b6d5-051dc408168a name=/runtime.v1alpha2.RuntimeService/ListContainers
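
The CreatedAt values in these dumps are unix-epoch timestamps in nanoseconds. As an illustration, decoding kube-proxy's CreatedAt of 1628812297745577752 (taken from the response above) places its start a few minutes before these 23:58:19 log entries:

    // Convert a CRI CreatedAt value (unix nanoseconds) to wall-clock time.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        created := time.Unix(0, 1628812297745577752)
        fmt.Println(created.UTC()) // 2021-08-12 23:51:37.745577752 +0000 UTC
    }
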
	Aug 12 23:58:19 addons-20210812235029-820289 crio[2077]: time="2021-08-12 23:58:19.956319146Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=96f1a8bb-6c89-4539-8c4d-840a32e3d6e8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 12 23:58:19 addons-20210812235029-820289 crio[2077]: time="2021-08-12 23:58:19.956454591Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=96f1a8bb-6c89-4539-8c4d-840a32e3d6e8 name=/runtime.v1alpha2.RuntimeService/ListContainers
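
The "No filters were applied" message reflects the empty ContainerFilter in each of these requests. As a hedged variant of the earlier sketch (reusing its client and ctx), the listing can instead be narrowed server-side, for example to running containers only:

    // Drop-in replacement for the empty request in the sketch above.
    // client and ctx come from that sketch; field names follow the
    // v1alpha2 ContainerFilter message seen in the log.
    filter := &runtimeapi.ContainerFilter{
        State: &runtimeapi.ContainerStateValue{
            State: runtimeapi.ContainerState_CONTAINER_RUNNING,
        },
        // Id, PodSandboxId, and LabelSelector can narrow the result further.
    }
    resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{Filter: filter})
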
	Aug 12 23:58:19 addons-20210812235029-820289 crio[2077]: time="2021-08-12 23:58:19.956879178Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:24ca02020cbbcefbb4301721f21a1ab5f5f0d233591bff87384e21357f681688,PodSandboxId:ff8260447e181fa4ff44a2bd7a9d5f2f0944233b75334498e6a52e3bcb460218,Metadata:&ContainerMetadata{Name:etcd-restore-operator,Attempt:0,},Image:&ImageSpec{Image:9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628812496407685607,Labels:map[string]string{io.kubernetes.container.name: etcd-restore-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-c5jdd,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: a251dd29-51a3-45fd-aa38-d910126913ea,},Annotations:map[string]string{io.kubernetes.container.hash: 6a3106c4,io.kubernetes.container.restartCoun
t: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d04086fbc6f40fc45bf4ec039228b8bf21b6aee8665969cb4eca57451130a7b,PodSandboxId:ff8260447e181fa4ff44a2bd7a9d5f2f0944233b75334498e6a52e3bcb460218,Metadata:&ContainerMetadata{Name:etcd-backup-operator,Attempt:0,},Image:&ImageSpec{Image:9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628812495914374041,Labels:map[string]string{io.kubernetes.container.name: etcd-backup-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-c5jdd,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: a251dd29-51a3-45fd-aa38-d910126913ea,},Annotations:map[string]string{io.kubernetes.container.hash: e9142ec4,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ccfe0fabcfc227a5f5d8af68a480147d136b0325a3171fae86f0500ead9e2e7,PodSandboxId:ff8260447e181fa4ff44a2bd7a9d5f2f0944233b75334498e6a52e3bcb460218,Metadata:&ContainerMetadata{Name:etcd-operator,Attempt:0,},Image:&ImageSpec{Image:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,Annotations:map[string]string{},},ImageRef:quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b,State:CONTAINER_RUNNING,CreatedAt:1628812495137946973,Labels:map[string]string{io.kubernetes.container.name: etcd-operator,io.kubernetes.pod.name: etcd-operator-85cd4f54cd-c5jdd,io.kubernetes.pod.namespace: my-etcd,io.kubernetes.pod.uid: a251dd29-51a3-45fd-aa38-d910126913ea,},Annotations:map[string]string{io.kubernetes.container.hash: a4475c14,io.kubernetes.conta
iner.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26ebec6a37c253bc83dea77a39b63637ac39751a5a7db0f933e5ff9145716f62,PodSandboxId:6cfdc1bddbde4925f23c205676b88612dcba769396b29a83aaf0c37cc10779d8,Metadata:&ContainerMetadata{Name:private-image-eu,Attempt:0,},Image:&ImageSpec{Image:europe-west1-docker.pkg.dev/k8s-minikube/test-artifacts-eu/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,Annotations:map[string]string{},},ImageRef:europe-west1-docker.pkg.dev/k8s-minikube/test-artifacts-eu/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,State:CONTAINER_RUNNING,CreatedAt:1628812486994333072,Labels:map[string]string{io.kubernetes.container.name: private-image-eu,io.kubernetes.pod.name: private-image-eu-5956d58f9f-qhzdr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: abed0c04-d2a3-4c5
0-899b-16a1db4432a3,},Annotations:map[string]string{io.kubernetes.container.hash: 75354c1d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:589183e860536b37198eebf41ddd7769db7d1722c08aa9cae51ca788603fe806,PodSandboxId:846a34bd0f2a21c414651bc1fb00563e9eb01b1e8b0f7b9c3ef0c5c23cc13204,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce,State:CONTAINER_RUNNING,CreatedAt:1628812467777458064,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7419f352-61f4-4cd1-b1a1-5f2622b3f293,},Annotation
s:map[string]string{io.kubernetes.container.hash: 439f59e2,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:878beb2424135ce004d37826518b3ced2d8a28b834227a0530524c2602b7a393,PodSandboxId:c3feaa6d0f2117a4a7528982f8956a5ab57388301f49a42ccdba7c9aa99636b0,Metadata:&ContainerMetadata{Name:private-image,Attempt:0,},Image:&ImageSpec{Image:us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,Annotations:map[string]string{},},ImageRef:us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8,State:CONTAINER_RUNNING,CreatedAt:1628812463068710000,Labels:map[string]string{io.kubernetes.container.name: private-image,io.kubernete
s.pod.name: private-image-7ff9c8c74f-vrcqj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4c8cd4c-3a3d-4c12-8863-a7209f060be2,},Annotations:map[string]string{io.kubernetes.container.hash: c7e432c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfd76cbf91426603b0d7b370f45186bfc7349cb6acd625e74b50f2c97d6d5462,PodSandboxId:4a53b00e3fd62a312a1ece65ba5f7032a766e056449b25d85439b13677c59f00,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,State:CONTAINER_RUNNING,CreatedAt:1628812434394331053,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernet
es.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 32b4ebc6-1950-4af1-9d15-bd7d8ddbe52c,},Annotations:map[string]string{io.kubernetes.container.hash: 85f81115,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79637d5190536c29c355e710c67cd16200301bac18136d33c0a8a5e5f866b76f,PodSandboxId:13b1d5e59e433a1225a5f85815fe59a76e6ff63588e9f368688dc2351f0631b5,Metadata:&ContainerMetadata{Name:packageserver,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628812375743364158,Labels:map[string]string{io.kubernetes.container.name: packageserver,io.kubernet
es.pod.name: packageserver-6488c6c757-lmf89,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 3f0a6c54-7925-430d-ab5f-a01d61a5edcb,},Annotations:map[string]string{io.kubernetes.container.hash: 42aad7cc,io.kubernetes.container.ports: [{\"containerPort\":5443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5e00a6b90a5271ce54b61377e461e4d88a19fd90dc06863d5a746577334ad1,PodSandboxId:162cbd30544c84d70345af95f654c7514300ee44f63755af89d98438e101376c,Metadata:&ContainerMetadata{Name:packageserver,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUN
NING,CreatedAt:1628812375330470665,Labels:map[string]string{io.kubernetes.container.name: packageserver,io.kubernetes.pod.name: packageserver-6488c6c757-ncn95,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: d1fe5f4e-801b-43bb-8d91-f98278b9b7e0,},Annotations:map[string]string{io.kubernetes.container.hash: 1ed90569,io.kubernetes.container.ports: [{\"containerPort\":5443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83fae35060ce317927df9c26e190d57130d6bf5ca320082e659c93d859772586,PodSandboxId:6a3b5b8a7e1c5939406c4754ff6db153de573b3ca6860cdad0a07cb06c3ff361,Metadata:&ContainerMetadata{Name:registry-server,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0,Annotations:map[string]
string{},},ImageRef:quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0,State:CONTAINER_RUNNING,CreatedAt:1628812374485600354,Labels:map[string]string{io.kubernetes.container.name: registry-server,io.kubernetes.pod.name: operatorhubio-catalog-lsjpz,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 2088d6f1-a488-4bbd-be2e-5a3f6482ffd3,},Annotations:map[string]string{io.kubernetes.container.hash: a6fa37,io.kubernetes.container.ports: [{\"name\":\"grpc\",\"containerPort\":50051,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d73d646852fe4aa976ce0bd39287080be283652ecd5611a35d72cd03dfa3b2,PodSandboxId:fba557957f59a84847ff0a75ecba6ae1dad057e8e2caabae2f5f455c1ff052de,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{I
mage:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1628812331035309077,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-6dngl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4ba67514-8f40-41b0-ad6f-3db817ed7f3e,},Annotations:map[string]string{io.kubernetes.container.hash: 424b9e3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c992643e54f96cd3ce9001ef74b8bbf6dda8febaaebe4496e8614586458fb9f8,PodSandboxId:0bba02f5b4bb2245a2fdd76ea6d06a9740e7c8570d360ea5636fa4644152e896,Metadata:&ContainerMetadata{Name:cre
ate,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1628812330084737828,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jswnd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 35929738-f6a6-4263-91dd-5e83b165fc66,},Annotations:map[string]string{io.kubernetes.container.hash: 784a9392,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47de47c596fe3e327aa94468d4ee52f31f4cc9eed74ced1c79468444052c72b3,PodSandboxId:72c70654465a7b64a9a596f260c12513b66f36d39aaf6c5338f070a68af7ee27,
Metadata:&ContainerMetadata{Name:olm-operator,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628812324661923427,Labels:map[string]string{io.kubernetes.container.name: olm-operator,io.kubernetes.pod.name: olm-operator-859c88c96-vsdlp,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 70ea73ed-8703-41ef-8b44-4383788352e1,},Annotations:map[string]string{io.kubernetes.container.hash: 456d2455,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":8081,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminatio
nGracePeriod: 30,},},&Container{Id:3cc66b29ae716721d7cfc70ab42c7883f2440ff9da5a55396d21d971ec76e878,PodSandboxId:c3dbb81d724b8247891b3fd6a1b98092c6ddc807e400c3e1e72fde8a5842ecea,Metadata:&ContainerMetadata{Name:catalog-operator,Attempt:0,},Image:&ImageSpec{Image:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,Annotations:map[string]string{},},ImageRef:quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607,State:CONTAINER_RUNNING,CreatedAt:1628812324027304855,Labels:map[string]string{io.kubernetes.container.name: catalog-operator,io.kubernetes.pod.name: catalog-operator-75d496484d-x22rr,io.kubernetes.pod.namespace: olm,io.kubernetes.pod.uid: 9ef3c8dc-b0a8-41e1-a9be-ad8ff5a7cd49,},Annotations:map[string]string{io.kubernetes.container.hash: 22ab0afe,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":8081,\"protocol\":\"TCP\"}],io.kubernetes.cont
ainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b83db69e2b1957cd9c27a10214e1e150f3ebf424abed5d0d8af9e975dd330c37,PodSandboxId:3523d49ad00e35dbcb993902231434cb2254feae748306a826d2e9ecf43d6807,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628812299895268336,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c90fe00-a4ae-4fe6-bf9e-0ec3765e7f67,},Annotations:map[string]string{io.kubernetes.container.hash: 9d6a3819,io
.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:144d0dbab7b4bb72eeb618430d51609f43dcea9e62c5c5c6cfc53589202468dc,PodSandboxId:4bcf0afaab52b40716bd095cd300edd339f0fe0eca9524deb4f9189e4f3719e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628812297745577752,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d2mcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 042d754e-0e03-4d55-b93b-ef111d733617,},Annotations:map[string]string{io.kubernetes.container.hash: 61224e33,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:252ee0cdeea822fe191a791748af57151814b3174fe9d7588e8b35af8180baa5,PodSandboxId:9a14ebd4bb39be8b1a2c719112f54e151d667bc4600adcc52bdb2e171863be40,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628812296301812126,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-nrmhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d29d822d-395c-41b5-b0e2-da279389bbec,},Annotations:map[string]string{io.kubernetes.container.hash: e8530073,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:832c0a74513bf5368b3dddeaa34591b32693c3d659a90e894db641d30945a94e,PodSandboxId:c8eddf71fb45c8b895a0205e186713a3cbfd052751758142deae5200187b79c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628812273986650559,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-20210812235029-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 1641765115f1c1317b2417660238322a,},Annotations:map[string]string{io.kubernetes.container.hash: 17a239e8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b610af4737f78f6e81098816b4d4bb32edad7e272eff91eedab1e1228915246,PodSandboxId:d414c06f351ebe53bbd8956c461fdef373a43169401904dfb8707770ac30753a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628812273685167504,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-20210812235029-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 002
18d7ecf1ada3625b3f1636c3a79de,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df7df43ae185dc0ace9a7e00d73739e38efa52af36ac1dd82116971a026fb789,PodSandboxId:8cfd5098d1c279eb09d72d1589c1c302e7a8df133b2650335202de95d57f8730,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628812273619122504,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-20210812235029-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914178d72
dc0b528da5ffaf8d8c376af,},Annotations:map[string]string{io.kubernetes.container.hash: 9cfa5d4f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d1f01676b8baf312ddbcf8715f52aa8544b29bb3b38375a6f3fa993969901fb,PodSandboxId:53e33974fb6735e31179ccc48bba01c3602647c854a1ba52ad1486f2dfc26a5e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628812273106325676,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-20210812235029-820289,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 143e03afe6ebd93d6ba969540d8c9889,},Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=96f1a8bb-6c89-4539-8c4d-840a32e3d6e8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                                           CREATED             STATE               NAME                      ATTEMPT             POD ID
	24ca02020cbbc       9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5                                                                                3 minutes ago       Running             etcd-restore-operator     0                   ff8260447e181
	4d04086fbc6f4       9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5                                                                                3 minutes ago       Running             etcd-backup-operator      0                   ff8260447e181
	0ccfe0fabcfc2       quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b                                            3 minutes ago       Running             etcd-operator             0                   ff8260447e181
	26ebec6a37c25       europe-west1-docker.pkg.dev/k8s-minikube/test-artifacts-eu/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8   3 minutes ago       Running             private-image-eu          0                   6cfdc1bddbde4
	589183e860536       docker.io/library/nginx@sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce                                                 3 minutes ago       Running             nginx                     0                   846a34bd0f2a2
	878beb2424135       us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8                3 minutes ago       Running             private-image             0                   c3feaa6d0f211
	cfd76cbf91426       docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                               4 minutes ago       Running             busybox                   0                   4a53b00e3fd62
	79637d5190536       quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607                                          5 minutes ago       Running             packageserver             0                   13b1d5e59e433
	bb5e00a6b90a5       quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607                                          5 minutes ago       Running             packageserver             0                   162cbd30544c8
	83fae35060ce3       quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0                 5 minutes ago       Running             registry-server           0                   6a3b5b8a7e1c5
	a9d73d646852f       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6                                  6 minutes ago       Exited              patch                     0                   fba557957f59a
	c992643e54f96       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6                                  6 minutes ago       Exited              create                    0                   0bba02f5b4bb2
	47de47c596fe3       quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607                                          6 minutes ago       Running             olm-operator              0                   72c70654465a7
	3cc66b29ae716       quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607                                          6 minutes ago       Running             catalog-operator          0                   c3dbb81d724b8
	b83db69e2b195       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                                6 minutes ago       Running             storage-provisioner       0                   3523d49ad00e3
	144d0dbab7b4b       adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92                                                                                6 minutes ago       Running             kube-proxy                0                   4bcf0afaab52b
	252ee0cdeea82       296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899                                                                                6 minutes ago       Running             coredns                   0                   9a14ebd4bb39b
	832c0a74513bf       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934                                                                                7 minutes ago       Running             etcd                      0                   c8eddf71fb45c
	9b610af4737f7       6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a                                                                                7 minutes ago       Running             kube-scheduler            0                   d414c06f351eb
	df7df43ae185d       3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80                                                                                7 minutes ago       Running             kube-apiserver            0                   8cfd5098d1c27
	4d1f01676b8ba       bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9                                                                                7 minutes ago       Running             kube-controller-manager   0                   53e33974fb673
	
	* 
	* ==> coredns [252ee0cdeea822fe191a791748af57151814b3174fe9d7588e8b35af8180baa5] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = 8f51b271a18f2ce6fcaee5f1cfda3ed0
	[INFO] Reloading complete
	E0812 23:51:36.460105       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0812 23:51:36.461836       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0812 23:51:36.461894       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0812 23:51:37.616792       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0812 23:51:37.797724       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0812 23:51:38.017366       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> describe nodes <==
	* Name:               addons-20210812235029-820289
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-20210812235029-820289
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=dc1c3ca26e9449ce488a773126b8450402c94a19
	                    minikube.k8s.io/name=addons-20210812235029-820289
	                    minikube.k8s.io/updated_at=2021_08_12T23_51_22_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-20210812235029-820289
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Aug 2021 23:51:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-20210812235029-820289
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Aug 2021 23:58:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Aug 2021 23:55:20 +0000   Thu, 12 Aug 2021 23:51:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Aug 2021 23:55:20 +0000   Thu, 12 Aug 2021 23:51:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Aug 2021 23:55:20 +0000   Thu, 12 Aug 2021 23:51:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Aug 2021 23:55:20 +0000   Thu, 12 Aug 2021 23:51:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.112
	  Hostname:    addons-20210812235029-820289
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3935016Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3935016Ki
	  pods:               110
	System Info:
	  Machine ID:                 452087aa60c54333a2af7a2776c247ba
	  System UUID:                452087aa-60c5-4333-a2af-7a2776c247ba
	  Boot ID:                    6f3edfe9-687d-42b8-ba31-153685900861
	  Kernel Version:             4.19.182
	  OS Image:                   Buildroot 2020.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.20.2
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (17 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  default                     nginx                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	  default                     private-image-7ff9c8c74f-vrcqj                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m18s
	  default                     private-image-eu-5956d58f9f-qhzdr                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 coredns-558bd4d5db-nrmhx                                100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     6m45s
	  kube-system                 etcd-addons-20210812235029-820289                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         6m52s
	  kube-system                 kube-apiserver-addons-20210812235029-820289             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m52s
	  kube-system                 kube-controller-manager-addons-20210812235029-820289    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m52s
	  kube-system                 kube-proxy-d2mcw                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m46s
	  kube-system                 kube-scheduler-addons-20210812235029-820289             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m52s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m42s
	  my-etcd                     etcd-operator-85cd4f54cd-c5jdd                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  olm                         catalog-operator-75d496484d-x22rr                       10m (0%)      0 (0%)      80Mi (2%)        0 (0%)         6m35s
	  olm                         olm-operator-859c88c96-vsdlp                            10m (0%)      0 (0%)      160Mi (4%)       0 (0%)         6m35s
	  olm                         operatorhubio-catalog-lsjpz                             10m (0%)      0 (0%)      50Mi (1%)        0 (0%)         6m16s
	  olm                         packageserver-6488c6c757-lmf89                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m14s
	  olm                         packageserver-6488c6c757-ncn95                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                780m (39%)   0 (0%)
	  memory             460Mi (11%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                   From        Message
	  ----    ------                   ----                  ----        -------
	  Normal  Starting                 7m10s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m9s (x5 over 7m10s)  kubelet     Node addons-20210812235029-820289 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m9s (x4 over 7m10s)  kubelet     Node addons-20210812235029-820289 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m9s (x4 over 7m10s)  kubelet     Node addons-20210812235029-820289 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m9s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 6m53s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m53s                 kubelet     Node addons-20210812235029-820289 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m53s                 kubelet     Node addons-20210812235029-820289 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m53s                 kubelet     Node addons-20210812235029-820289 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m52s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m52s                 kubelet     Node addons-20210812235029-820289 status is now: NodeReady
	  Normal  Starting                 6m42s                 kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +5.012255] kauditd_printk_skb: 53 callbacks suppressed
	[ +17.064106] kauditd_printk_skb: 62 callbacks suppressed
	[ +16.241899] kauditd_printk_skb: 8 callbacks suppressed
	[  +3.369211] NFSD: Unable to end grace period: -110
	[  +9.088043] kauditd_printk_skb: 2 callbacks suppressed
	[Aug12 23:53] kauditd_printk_skb: 14 callbacks suppressed
	[  +8.739801] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.510000] kauditd_printk_skb: 2 callbacks suppressed
	[ +16.099078] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.106043] kauditd_printk_skb: 32 callbacks suppressed
	[  +6.203893] kauditd_printk_skb: 53 callbacks suppressed
	[  +6.015495] kauditd_printk_skb: 11 callbacks suppressed
	[Aug12 23:54] kauditd_printk_skb: 41 callbacks suppressed
	[  +8.645249] kauditd_printk_skb: 128 callbacks suppressed
	[  +5.489321] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.998192] kauditd_printk_skb: 14 callbacks suppressed
	[  +7.085340] kauditd_printk_skb: 5 callbacks suppressed
	[ +10.851767] kauditd_printk_skb: 11 callbacks suppressed
	[  +9.987636] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.824673] kauditd_printk_skb: 14 callbacks suppressed
	[Aug12 23:55] kauditd_printk_skb: 59 callbacks suppressed
	[  +5.009567] kauditd_printk_skb: 74 callbacks suppressed
	[  +5.762465] kauditd_printk_skb: 62 callbacks suppressed
	[Aug12 23:57] kauditd_printk_skb: 38 callbacks suppressed
	[Aug12 23:58] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [0ccfe0fabcfc227a5f5d8af68a480147d136b0325a3171fae86f0500ead9e2e7] <==
	* time="2021-08-12T23:54:55Z" level=info msg="etcd-operator Version: 0.9.4"
	time="2021-08-12T23:54:55Z" level=info msg="Git SHA: c8a1c64"
	time="2021-08-12T23:54:55Z" level=info msg="Go Version: go1.11.5"
	time="2021-08-12T23:54:55Z" level=info msg="Go OS/Arch: linux/amd64"
	E0812 23:54:55.279198       1 event.go:259] Could not construct reference to: '&v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"etcd-operator", GenerateName:"", Namespace:"my-etcd", SelfLink:"", UID:"fde12f45-927a-4468-9ea5-2acd3fb951c2", ResourceVersion:"1887", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764409295, loc:(*time.Location)(0x20d4640)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"etcd-operator-85cd4f54cd-c5jdd\",\"leaseDurationSeconds\":15,\"acquireTime\":\"2021-08-12T23:54:55Z\",\"renewTime\":\"2021-08-12T23:54:55Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Subsets:[]v1.EndpointSubset(nil)}' due to: 'selfLink was empty, can't make reference'. Will not r
eport event: 'Normal' 'LeaderElection' 'etcd-operator-85cd4f54cd-c5jdd became leader'
	
	* 
	* ==> etcd [24ca02020cbbcefbb4301721f21a1ab5f5f0d233591bff87384e21357f681688] <==
	* time="2021-08-12T23:54:56Z" level=info msg="Go Version: go1.11.5"
	time="2021-08-12T23:54:56Z" level=info msg="Go OS/Arch: linux/amd64"
	time="2021-08-12T23:54:56Z" level=info msg="etcd-restore-operator Version: 0.9.4"
	time="2021-08-12T23:54:56Z" level=info msg="Git SHA: c8a1c64"
	E0812 23:54:56.763955       1 event.go:259] Could not construct reference to: '&v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"etcd-restore-operator", GenerateName:"", Namespace:"my-etcd", SelfLink:"", UID:"a73ce01f-3b8f-4ff0-95f1-5cbc9638ca2d", ResourceVersion:"1922", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764409296, loc:(*time.Location)(0x24e11a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"etcd-operator-alm-owned"}, Annotations:map[string]string{"endpoints.kubernetes.io/last-change-trigger-time":"2021-08-12T23:54:56Z", "control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"etcd-operator-85cd4f54cd-c5jdd\",\"leaseDurationSeconds\":15,\"acquireTime\":\"2021-08-12T23:54:56Z\",\"renewTime\":\"2021-08-12T23:54:56Z\",\"leaderTransitions\":1}"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), Cl
usterName:""}, Subsets:[]v1.EndpointSubset(nil)}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'LeaderElection' 'etcd-operator-85cd4f54cd-c5jdd became leader'
	time="2021-08-12T23:54:56Z" level=info msg="starting restore controller" pkg=controller
	time="2021-08-12T23:54:56Z" level=info msg="listening on 0.0.0.0:19999"
	
	* 
	* ==> etcd [4d04086fbc6f40fc45bf4ec039228b8bf21b6aee8665969cb4eca57451130a7b] <==
	* time="2021-08-12T23:54:56Z" level=info msg="Go Version: go1.11.5"
	time="2021-08-12T23:54:56Z" level=info msg="Go OS/Arch: linux/amd64"
	time="2021-08-12T23:54:56Z" level=info msg="etcd-backup-operator Version: 0.9.4"
	time="2021-08-12T23:54:56Z" level=info msg="Git SHA: c8a1c64"
	E0812 23:54:56.203093       1 event.go:259] Could not construct reference to: '&v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"etcd-backup-operator", GenerateName:"", Namespace:"my-etcd", SelfLink:"", UID:"be71e947-d212-4127-86c7-3d5325a16a7c", ResourceVersion:"1914", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764409296, loc:(*time.Location)(0x25824c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"etcd-operator-85cd4f54cd-c5jdd\",\"leaseDurationSeconds\":15,\"acquireTime\":\"2021-08-12T23:54:56Z\",\"renewTime\":\"2021-08-12T23:54:56Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Subsets:[]v1.EndpointSubset(nil)}' due to: 'selfLink was empty, can't make reference'. Wil
l not report event: 'Normal' 'LeaderElection' 'etcd-operator-85cd4f54cd-c5jdd became leader'
	time="2021-08-12T23:54:56Z" level=info msg="starting backup controller" pkg=controller
	
	* 
	* ==> etcd [832c0a74513bf5368b3dddeaa34591b32693c3d659a90e894db641d30945a94e] <==
	* 2021-08-12 23:54:37.968855 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/clusterserviceversions/my-etcd/etcdoperator.v0.9.4\" " with result "range_response_count:1 size:21391" took too long (126.659389ms) to execute
	2021-08-12 23:54:39.529627 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-12 23:54:49.526791 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-12 23:54:54.413887 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/clusterserviceversions/my-etcd/etcdoperator.v0.9.4\" " with result "range_response_count:1 size:22177" took too long (333.401308ms) to execute
	2021-08-12 23:54:59.527095 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-12 23:55:09.527655 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-12 23:55:19.527246 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-12 23:55:29.527688 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-12 23:55:39.526819 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-12 23:55:49.527121 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-12 23:55:59.527856 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-12 23:56:09.527411 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-12 23:56:19.527255 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-12 23:56:29.527260 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-12 23:56:39.527435 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-12 23:56:49.526186 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-12 23:56:59.527637 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-12 23:57:09.526650 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-12 23:57:19.526764 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-12 23:57:29.527539 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-12 23:57:39.526925 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-12 23:57:49.527432 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-12 23:57:59.536259 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-12 23:58:09.526632 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-12 23:58:19.527976 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  23:58:20 up 7 min,  0 users,  load average: 1.07, 2.77, 1.70
	Linux addons-20210812235029-820289 4.19.182 #1 SMP Fri Aug 6 09:11:32 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [df7df43ae185dc0ace9a7e00d73739e38efa52af36ac1dd82116971a026fb789] <==
	* Trace[609198561]: ---"Object stored in database" 600ms (23:54:00.533)
	Trace[609198561]: [601.244209ms] [601.244209ms] END
	I0812 23:54:48.108364       1 controller.go:611] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0812 23:55:05.769639       1 client.go:360] parsed scheme: "passthrough"
	I0812 23:55:05.769850       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0812 23:55:05.769901       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0812 23:55:20.861975       1 cacher.go:148] Terminating all watchers from cacher *unstructured.Unstructured
	W0812 23:55:21.019309       1 cacher.go:148] Terminating all watchers from cacher *unstructured.Unstructured
	W0812 23:55:21.139898       1 cacher.go:148] Terminating all watchers from cacher *unstructured.Unstructured
	I0812 23:55:44.501619       1 client.go:360] parsed scheme: "passthrough"
	I0812 23:55:44.501777       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0812 23:55:44.501793       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0812 23:56:18.617525       1 client.go:360] parsed scheme: "passthrough"
	I0812 23:56:18.617696       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0812 23:56:18.617717       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0812 23:56:50.355134       1 client.go:360] parsed scheme: "passthrough"
	I0812 23:56:50.355287       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0812 23:56:50.355309       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0812 23:57:23.242414       1 client.go:360] parsed scheme: "passthrough"
	I0812 23:57:23.242979       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0812 23:57:23.243155       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0812 23:57:57.338242       1 client.go:360] parsed scheme: "passthrough"
	I0812 23:57:57.338385       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0812 23:57:57.338400       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	E0812 23:57:59.779266       1 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	* 
	* ==> kube-controller-manager [4d1f01676b8baf312ddbcf8715f52aa8544b29bb3b38375a6f3fa993969901fb] <==
	* E0812 23:55:24.572612       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0812 23:55:24.769769       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0812 23:55:28.315933       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0812 23:55:30.195392       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0812 23:55:30.223968       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0812 23:55:35.090828       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	I0812 23:55:35.091333       1 shared_informer.go:247] Caches are synced for resource quota 
	I0812 23:55:36.511749       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0812 23:55:36.511868       1 shared_informer.go:247] Caches are synced for garbage collector 
	E0812 23:55:39.412329       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0812 23:55:42.530721       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0812 23:55:42.557711       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0812 23:55:59.146179       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0812 23:56:01.713540       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0812 23:56:07.116565       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0812 23:56:31.940674       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0812 23:56:34.821602       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0812 23:56:50.627613       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0812 23:57:16.019664       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0812 23:57:17.940923       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0812 23:57:44.271564       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0812 23:57:55.353755       1 tokens_controller.go:262] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-rwqtv" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	E0812 23:58:08.518303       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0812 23:58:08.654326       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0812 23:58:14.279946       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [144d0dbab7b4bb72eeb618430d51609f43dcea9e62c5c5c6cfc53589202468dc] <==
	* I0812 23:51:38.103083       1 node.go:172] Successfully retrieved node IP: 192.168.39.112
	I0812 23:51:38.103216       1 server_others.go:140] Detected node IP 192.168.39.112
	W0812 23:51:38.103254       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	W0812 23:51:38.171184       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0812 23:51:38.171295       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0812 23:51:38.171311       1 server_others.go:212] Using iptables Proxier.
	I0812 23:51:38.171643       1 server.go:643] Version: v1.21.3
	I0812 23:51:38.172565       1 config.go:315] Starting service config controller
	I0812 23:51:38.172678       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0812 23:51:38.172705       1 config.go:224] Starting endpoint slice config controller
	I0812 23:51:38.172710       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0812 23:51:38.181822       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0812 23:51:38.184896       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0812 23:51:38.273103       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0812 23:51:38.273231       1 shared_informer.go:247] Caches are synced for service config 
	W0812 23:57:12.216527       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	
	* 
	* ==> kube-scheduler [9b610af4737f78f6e81098816b4d4bb32edad7e272eff91eedab1e1228915246] <==
	* E0812 23:51:18.790264       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0812 23:51:18.790485       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0812 23:51:18.790842       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0812 23:51:18.791169       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0812 23:51:18.790860       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0812 23:51:18.790927       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0812 23:51:18.791847       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0812 23:51:18.791847       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0812 23:51:18.791897       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0812 23:51:18.792302       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0812 23:51:18.792253       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0812 23:51:18.792612       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0812 23:51:19.662769       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0812 23:51:19.670168       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0812 23:51:19.735679       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0812 23:51:19.752799       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0812 23:51:19.810872       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0812 23:51:19.857819       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0812 23:51:19.919623       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0812 23:51:19.955338       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0812 23:51:19.968445       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0812 23:51:20.063132       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0812 23:51:20.101893       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0812 23:51:20.110374       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0812 23:51:22.195272       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2021-08-12 23:50:40 UTC, end at Thu 2021-08-12 23:58:20 UTC. --
	Aug 12 23:56:17 addons-20210812235029-820289 kubelet[2805]: I0812 23:56:17.819343    2805 kubelet_pods.go:895] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/private-image-eu-5956d58f9f-qhzdr" secret="" err="secret \"gcp-auth\" not found"
	Aug 12 23:56:26 addons-20210812235029-820289 kubelet[2805]: I0812 23:56:26.817159    2805 kubelet_pods.go:895] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Aug 12 23:56:44 addons-20210812235029-820289 kubelet[2805]: I0812 23:56:44.817203    2805 kubelet_pods.go:895] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/private-image-7ff9c8c74f-vrcqj" secret="" err="secret \"gcp-auth\" not found"
	Aug 12 23:56:44 addons-20210812235029-820289 kubelet[2805]: I0812 23:56:44.817764    2805 kubelet_pods.go:895] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/nginx" secret="" err="secret \"gcp-auth\" not found"
	Aug 12 23:57:20 addons-20210812235029-820289 kubelet[2805]: I0812 23:57:20.816621    2805 kubelet_pods.go:895] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/private-image-eu-5956d58f9f-qhzdr" secret="" err="secret \"gcp-auth\" not found"
	Aug 12 23:57:45 addons-20210812235029-820289 kubelet[2805]: I0812 23:57:45.816501    2805 kubelet_pods.go:895] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Aug 12 23:57:50 addons-20210812235029-820289 kubelet[2805]: E0812 23:57:50.532485    2805 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-59b45fb494-mzqll.169ab47e77edb9e7", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-59b45fb494-mzqll", UID:"3b491ab0-9e3e-4c33-a963-105ede42a913", APIVersion:"v1", ResourceVersion:"545", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"addons-20210812235029-820289"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc03d8b3f9eec4de7, ext:388463295905, loc:(*time.Location)(0x74c3600)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc03d8b3f9eec4de7, ext:388463295905, loc:(*time.Location)(0x74c3600)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-59b45fb494-mzqll.169ab47e77edb9e7" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 12 23:57:51 addons-20210812235029-820289 kubelet[2805]: E0812 23:57:51.064404    2805 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-59b45fb494-mzqll.169ab47e98390eb1", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-59b45fb494-mzqll", UID:"3b491ab0-9e3e-4c33-a963-105ede42a913", APIVersion:"v1", ResourceVersion:"545", FieldPath:"spec.containers{controller}"}, Reason:"Unhealthy", Message:"Liveness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"addons-20210812235029-820289"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc03d8b3fc39cd8b1, ext:389005103646, loc:(*time.Location)(0x74c3600)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc03d8b3fc39cd8b1, ext:389005103646, loc:(*time.Location)(0x74c3600)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-59b45fb494-mzqll.169ab47e98390eb1" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 12 23:57:51 addons-20210812235029-820289 kubelet[2805]: E0812 23:57:51.066708    2805 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-59b45fb494-mzqll.169ab47e98418f4a", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-59b45fb494-mzqll", UID:"3b491ab0-9e3e-4c33-a963-105ede42a913", APIVersion:"v1", ResourceVersion:"545", FieldPath:"spec.containers{controller}"}, Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"addons-20210812235029-820289"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc03d8b3fc3a5594a, ext:389005660766, loc:(*time.Location)(0x74c3600)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc03d8b3fc3a5594a, ext:389005660766, loc:(*time.Location)(0x74c3600)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-59b45fb494-mzqll.169ab47e98418f4a" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 12 23:58:01 addons-20210812235029-820289 kubelet[2805]: E0812 23:58:01.068247    2805 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-59b45fb494-mzqll.169ab47e98418f4a", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-59b45fb494-mzqll", UID:"3b491ab0-9e3e-4c33-a963-105ede42a913", APIVersion:"v1", ResourceVersion:"545", FieldPath:"spec.containers{controller}"}, Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"addons-20210812235029-820289"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc03d8b3fc3a5594a, ext:389005660766, loc:(*time.Location)(0x74c3600)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc03d8b42439e7edc, ext:399005211618, loc:(*time.Location)(0x74c3600)}}, Count:2, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-59b45fb494-mzqll.169ab47e98418f4a" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 12 23:58:01 addons-20210812235029-820289 kubelet[2805]: E0812 23:58:01.074468    2805 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-59b45fb494-mzqll.169ab47e98390eb1", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-59b45fb494-mzqll", UID:"3b491ab0-9e3e-4c33-a963-105ede42a913", APIVersion:"v1", ResourceVersion:"545", FieldPath:"spec.containers{controller}"}, Reason:"Unhealthy", Message:"Liveness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"addons-20210812235029-820289"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc03d8b3fc39cd8b1, ext:389005103646, loc:(*time.Location)(0x74c3600)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc03d8b4243a07db2, ext:399005342301, loc:(*time.Location)(0x74c3600)}}, Count:2, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-59b45fb494-mzqll.169ab47e98390eb1" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 12 23:58:01 addons-20210812235029-820289 kubelet[2805]: I0812 23:58:01.820601    2805 kubelet_pods.go:895] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/nginx" secret="" err="secret \"gcp-auth\" not found"
	Aug 12 23:58:02 addons-20210812235029-820289 kubelet[2805]: I0812 23:58:02.580774    2805 scope.go:111] "RemoveContainer" containerID="ddb27afa9d4c9c426952be8464fa220863181190d3b40c26663fc679aa479038"
	Aug 12 23:58:02 addons-20210812235029-820289 kubelet[2805]: I0812 23:58:02.657077    2805 scope.go:111] "RemoveContainer" containerID="ddb27afa9d4c9c426952be8464fa220863181190d3b40c26663fc679aa479038"
	Aug 12 23:58:02 addons-20210812235029-820289 kubelet[2805]: E0812 23:58:02.675493    2805 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddb27afa9d4c9c426952be8464fa220863181190d3b40c26663fc679aa479038\": container with ID starting with ddb27afa9d4c9c426952be8464fa220863181190d3b40c26663fc679aa479038 not found: ID does not exist" containerID="ddb27afa9d4c9c426952be8464fa220863181190d3b40c26663fc679aa479038"
	Aug 12 23:58:02 addons-20210812235029-820289 kubelet[2805]: I0812 23:58:02.675540    2805 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:ddb27afa9d4c9c426952be8464fa220863181190d3b40c26663fc679aa479038} err="failed to get container status \"ddb27afa9d4c9c426952be8464fa220863181190d3b40c26663fc679aa479038\": rpc error: code = NotFound desc = could not find container \"ddb27afa9d4c9c426952be8464fa220863181190d3b40c26663fc679aa479038\": container with ID starting with ddb27afa9d4c9c426952be8464fa220863181190d3b40c26663fc679aa479038 not found: ID does not exist"
	Aug 12 23:58:02 addons-20210812235029-820289 kubelet[2805]: I0812 23:58:02.748517    2805 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rqj7\" (UniqueName: \"kubernetes.io/projected/3b491ab0-9e3e-4c33-a963-105ede42a913-kube-api-access-8rqj7\") pod \"3b491ab0-9e3e-4c33-a963-105ede42a913\" (UID: \"3b491ab0-9e3e-4c33-a963-105ede42a913\") "
	Aug 12 23:58:02 addons-20210812235029-820289 kubelet[2805]: I0812 23:58:02.748579    2805 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3b491ab0-9e3e-4c33-a963-105ede42a913-webhook-cert\") pod \"3b491ab0-9e3e-4c33-a963-105ede42a913\" (UID: \"3b491ab0-9e3e-4c33-a963-105ede42a913\") "
	Aug 12 23:58:02 addons-20210812235029-820289 kubelet[2805]: I0812 23:58:02.763591    2805 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b491ab0-9e3e-4c33-a963-105ede42a913-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "3b491ab0-9e3e-4c33-a963-105ede42a913" (UID: "3b491ab0-9e3e-4c33-a963-105ede42a913"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 12 23:58:02 addons-20210812235029-820289 kubelet[2805]: I0812 23:58:02.765192    2805 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b491ab0-9e3e-4c33-a963-105ede42a913-kube-api-access-8rqj7" (OuterVolumeSpecName: "kube-api-access-8rqj7") pod "3b491ab0-9e3e-4c33-a963-105ede42a913" (UID: "3b491ab0-9e3e-4c33-a963-105ede42a913"). InnerVolumeSpecName "kube-api-access-8rqj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 12 23:58:02 addons-20210812235029-820289 kubelet[2805]: I0812 23:58:02.850222    2805 reconciler.go:319] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3b491ab0-9e3e-4c33-a963-105ede42a913-webhook-cert\") on node \"addons-20210812235029-820289\" DevicePath \"\""
	Aug 12 23:58:02 addons-20210812235029-820289 kubelet[2805]: I0812 23:58:02.850354    2805 reconciler.go:319] "Volume detached for volume \"kube-api-access-8rqj7\" (UniqueName: \"kubernetes.io/projected/3b491ab0-9e3e-4c33-a963-105ede42a913-kube-api-access-8rqj7\") on node \"addons-20210812235029-820289\" DevicePath \"\""
	Aug 12 23:58:03 addons-20210812235029-820289 kubelet[2805]: E0812 23:58:03.823237    2805 kuberuntime_container.go:691] "Kill container failed" err="rpc error: code = NotFound desc = could not find container \"ddb27afa9d4c9c426952be8464fa220863181190d3b40c26663fc679aa479038\": container with ID starting with ddb27afa9d4c9c426952be8464fa220863181190d3b40c26663fc679aa479038 not found: ID does not exist" pod="ingress-nginx/ingress-nginx-controller-59b45fb494-mzqll" podUID=3b491ab0-9e3e-4c33-a963-105ede42a913 containerName="controller" containerID={Type:cri-o ID:ddb27afa9d4c9c426952be8464fa220863181190d3b40c26663fc679aa479038}
	Aug 12 23:58:03 addons-20210812235029-820289 kubelet[2805]: E0812 23:58:03.831721    2805 kubelet_pods.go:1288] "Failed killing the pod" err="failed to \"KillContainer\" for \"controller\" with KillContainerError: \"rpc error: code = NotFound desc = could not find container \\\"ddb27afa9d4c9c426952be8464fa220863181190d3b40c26663fc679aa479038\\\": container with ID starting with ddb27afa9d4c9c426952be8464fa220863181190d3b40c26663fc679aa479038 not found: ID does not exist\"" podName="ingress-nginx-controller-59b45fb494-mzqll"
	Aug 12 23:58:05 addons-20210812235029-820289 kubelet[2805]: I0812 23:58:05.817345    2805 kubelet_pods.go:895] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/private-image-7ff9c8c74f-vrcqj" secret="" err="secret \"gcp-auth\" not found"
	
	* 
	* ==> storage-provisioner [b83db69e2b1957cd9c27a10214e1e150f3ebf424abed5d0d8af9e975dd330c37] <==
	* I0812 23:51:40.000422       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0812 23:51:40.022378       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0812 23:51:40.022407       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0812 23:51:40.037951       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0812 23:51:40.038734       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"821b39ba-1258-47b0-9791-549c250e97ea", APIVersion:"v1", ResourceVersion:"482", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-20210812235029-820289_8bdd48bc-1d3c-4ae0-bed5-c8a3d5c4e465 became leader
	I0812 23:51:40.038761       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-20210812235029-820289_8bdd48bc-1d3c-4ae0-bed5-c8a3d5c4e465!
	I0812 23:51:40.141092       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-20210812235029-820289_8bdd48bc-1d3c-4ae0-bed5-c8a3d5c4e465!
	E0812 23:54:58.142384       1 controller.go:1050] claim "98e6657a-5f23-44a5-b05a-918f5760ce1b" in work queue no longer exists
	

-- /stdout --
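
The scheduler's "forbidden" list/watch errors at 23:51:18-23:51:20 in the dump above are typical startup noise while authorization caches warm up; they stop once the informer caches sync at 23:51:22. The actionable signal is in the kubelet excerpt: the ingress-nginx controller container was killed and its events were rejected because the ingress-nginx namespace was already terminating, so the controller was mid-teardown during the test window. A minimal manual spot-check, reusing the context name from the logs above (illustrative commands, not part of the harness):

	kubectl --context addons-20210812235029-820289 -n ingress-nginx get pods -o wide
	kubectl --context addons-20210812235029-820289 -n ingress-nginx get events --sort-by=.lastTimestamp
	kubectl auth can-i list pods --as=system:kube-scheduler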
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-20210812235029-820289 -n addons-20210812235029-820289
helpers_test.go:262: (dbg) Run:  kubectl --context addons-20210812235029-820289 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: 
helpers_test.go:273: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context addons-20210812235029-820289 describe pod 
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context addons-20210812235029-820289 describe pod : exit status 1 (48.655321ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:278: kubectl --context addons-20210812235029-820289 describe pod : exit status 1
--- FAIL: TestAddons/parallel/Ingress (255.94s)

TestMultiNode/serial/DeployApp2Nodes (33.63s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:462: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813000359-820289 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813000359-820289 -- rollout status deployment/busybox
multinode_test.go:467: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-20210813000359-820289 -- rollout status deployment/busybox: (3.857088971s)
multinode_test.go:473: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813000359-820289 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:485: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813000359-820289 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813000359-820289 -- exec busybox-84b6686758-gpb9d -- nslookup kubernetes.io
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813000359-820289 -- exec busybox-84b6686758-p6fb8 -- nslookup kubernetes.io
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-20210813000359-820289 -- exec busybox-84b6686758-p6fb8 -- nslookup kubernetes.io: exit status 1 (5.250147671s)

-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.io'
	command terminated with exit code 1

** /stderr **
multinode_test.go:495: Pod busybox-84b6686758-p6fb8 could not resolve 'kubernetes.io': exit status 1
multinode_test.go:503: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813000359-820289 -- exec busybox-84b6686758-gpb9d -- nslookup kubernetes.default
multinode_test.go:503: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813000359-820289 -- exec busybox-84b6686758-p6fb8 -- nslookup kubernetes.default
multinode_test.go:503: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-20210813000359-820289 -- exec busybox-84b6686758-p6fb8 -- nslookup kubernetes.default: exit status 1 (10.249118029s)

-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.default'
	command terminated with exit code 1

** /stderr **
multinode_test.go:505: Pod busybox-84b6686758-p6fb8 could not resolve 'kubernetes.default': exit status 1
multinode_test.go:511: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813000359-820289 -- exec busybox-84b6686758-gpb9d -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:511: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813000359-820289 -- exec busybox-84b6686758-p6fb8 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:511: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-20210813000359-820289 -- exec busybox-84b6686758-p6fb8 -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (10.261819989s)

-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
	command terminated with exit code 1

** /stderr **
multinode_test.go:513: Pod busybox-84b6686758-p6fb8 could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
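
All three failing lookups come from busybox-84b6686758-p6fb8, while busybox-84b6686758-gpb9d resolves the same names, and p6fb8's resolver does point at the cluster DNS service (Server: 10.96.0.10). That pattern is consistent with DNS queries from one pod failing to reach CoreDNS across nodes rather than a bad resolv.conf. A rough manual check, reusing the harness's kubectl wrapper and the pod names above (illustrative only):

	out/minikube-linux-amd64 kubectl -p multinode-20210813000359-820289 -- get pods -o wide
	out/minikube-linux-amd64 kubectl -p multinode-20210813000359-820289 -- -n kube-system get pods -l k8s-app=kube-dns -o wide
	out/minikube-linux-amd64 kubectl -p multinode-20210813000359-820289 -- exec busybox-84b6686758-p6fb8 -- nslookup kubernetes.default 10.96.0.10

If the two busybox pods sit on different nodes and the explicit-server lookup still times out, suspicion falls on inter-node pod networking (the kindnet CNI selected at start, per the log below) rather than CoreDNS itself.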
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-20210813000359-820289 -n multinode-20210813000359-820289
helpers_test.go:245: <<< TestMultiNode/serial/DeployApp2Nodes FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813000359-820289 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210813000359-820289 logs -n 25: (1.346566846s)
helpers_test.go:253: TestMultiNode/serial/DeployApp2Nodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-----------------------------------------|----------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                 Profile                 |   User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|-----------------------------------------|----------|---------|-------------------------------|-------------------------------|
	| -p      | functional-20210812235933-820289                  | functional-20210812235933-820289        | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:02:04 UTC | Fri, 13 Aug 2021 00:02:04 UTC |
	|         | ssh sudo cat                                      |                                         |          |         |                               |                               |
	|         | /usr/share/ca-certificates/820289.pem             |                                         |          |         |                               |                               |
	| -p      | functional-20210812235933-820289                  | functional-20210812235933-820289        | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:02:04 UTC | Fri, 13 Aug 2021 00:02:04 UTC |
	|         | ssh sudo cat                                      |                                         |          |         |                               |                               |
	|         | /etc/ssl/certs/51391683.0                         |                                         |          |         |                               |                               |
	| -p      | functional-20210812235933-820289                  | functional-20210812235933-820289        | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:02:04 UTC | Fri, 13 Aug 2021 00:02:05 UTC |
	|         | ssh sudo cat                                      |                                         |          |         |                               |                               |
	|         | /etc/ssl/certs/8202892.pem                        |                                         |          |         |                               |                               |
	| -p      | functional-20210812235933-820289                  | functional-20210812235933-820289        | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:02:05 UTC | Fri, 13 Aug 2021 00:02:05 UTC |
	|         | ssh sudo cat                                      |                                         |          |         |                               |                               |
	|         | /usr/share/ca-certificates/8202892.pem            |                                         |          |         |                               |                               |
	| -p      | functional-20210812235933-820289                  | functional-20210812235933-820289        | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:02:05 UTC | Fri, 13 Aug 2021 00:02:05 UTC |
	|         | ssh sudo cat                                      |                                         |          |         |                               |                               |
	|         | /etc/ssl/certs/3ec20f2e.0                         |                                         |          |         |                               |                               |
	| -p      | functional-20210812235933-820289                  | functional-20210812235933-820289        | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:02:05 UTC | Fri, 13 Aug 2021 00:02:05 UTC |
	|         | version --short                                   |                                         |          |         |                               |                               |
	| -p      | functional-20210812235933-820289                  | functional-20210812235933-820289        | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:02:05 UTC | Fri, 13 Aug 2021 00:02:06 UTC |
	|         | version -o=json --components                      |                                         |          |         |                               |                               |
	| -p      | functional-20210812235933-820289                  | functional-20210812235933-820289        | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:02:06 UTC | Fri, 13 Aug 2021 00:02:06 UTC |
	|         | update-context --alsologtostderr                  |                                         |          |         |                               |                               |
	|         | -v=2                                              |                                         |          |         |                               |                               |
	| -p      | functional-20210812235933-820289                  | functional-20210812235933-820289        | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:02:06 UTC | Fri, 13 Aug 2021 00:02:06 UTC |
	|         | update-context --alsologtostderr                  |                                         |          |         |                               |                               |
	|         | -v=2                                              |                                         |          |         |                               |                               |
	| -p      | functional-20210812235933-820289                  | functional-20210812235933-820289        | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:02:06 UTC | Fri, 13 Aug 2021 00:02:06 UTC |
	|         | update-context --alsologtostderr                  |                                         |          |         |                               |                               |
	|         | -v=2                                              |                                         |          |         |                               |                               |
	| delete  | -p                                                | functional-20210812235933-820289        | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:02:41 UTC | Fri, 13 Aug 2021 00:02:42 UTC |
	|         | functional-20210812235933-820289                  |                                         |          |         |                               |                               |
	| start   | -p                                                | json-output-20210813000242-820289       | testUser | v1.22.0 | Fri, 13 Aug 2021 00:02:42 UTC | Fri, 13 Aug 2021 00:03:48 UTC |
	|         | json-output-20210813000242-820289                 |                                         |          |         |                               |                               |
	|         | --output=json --user=testUser                     |                                         |          |         |                               |                               |
	|         | --memory=2200 --wait=true                         |                                         |          |         |                               |                               |
	|         | --driver=kvm2                                     |                                         |          |         |                               |                               |
	|         | --container-runtime=crio                          |                                         |          |         |                               |                               |
	| pause   | -p                                                | json-output-20210813000242-820289       | testUser | v1.22.0 | Fri, 13 Aug 2021 00:03:48 UTC | Fri, 13 Aug 2021 00:03:49 UTC |
	|         | json-output-20210813000242-820289                 |                                         |          |         |                               |                               |
	|         | --output=json --user=testUser                     |                                         |          |         |                               |                               |
	| unpause | -p                                                | json-output-20210813000242-820289       | testUser | v1.22.0 | Fri, 13 Aug 2021 00:03:49 UTC | Fri, 13 Aug 2021 00:03:50 UTC |
	|         | json-output-20210813000242-820289                 |                                         |          |         |                               |                               |
	|         | --output=json --user=testUser                     |                                         |          |         |                               |                               |
	| stop    | -p                                                | json-output-20210813000242-820289       | testUser | v1.22.0 | Fri, 13 Aug 2021 00:03:50 UTC | Fri, 13 Aug 2021 00:03:58 UTC |
	|         | json-output-20210813000242-820289                 |                                         |          |         |                               |                               |
	|         | --output=json --user=testUser                     |                                         |          |         |                               |                               |
	| delete  | -p                                                | json-output-20210813000242-820289       | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:03:58 UTC | Fri, 13 Aug 2021 00:03:59 UTC |
	|         | json-output-20210813000242-820289                 |                                         |          |         |                               |                               |
	| delete  | -p                                                | json-output-error-20210813000359-820289 | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:03:59 UTC | Fri, 13 Aug 2021 00:03:59 UTC |
	|         | json-output-error-20210813000359-820289           |                                         |          |         |                               |                               |
	| start   | -p                                                | multinode-20210813000359-820289         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:03:59 UTC | Fri, 13 Aug 2021 00:05:58 UTC |
	|         | multinode-20210813000359-820289                   |                                         |          |         |                               |                               |
	|         | --wait=true --memory=2200                         |                                         |          |         |                               |                               |
	|         | --nodes=2 -v=8                                    |                                         |          |         |                               |                               |
	|         | --alsologtostderr --driver=kvm2                   |                                         |          |         |                               |                               |
	|         |  --container-runtime=crio                         |                                         |          |         |                               |                               |
	| kubectl | -p multinode-20210813000359-820289 -- apply -f    | multinode-20210813000359-820289         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:05:58 UTC | Fri, 13 Aug 2021 00:05:59 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                                         |          |         |                               |                               |
	| kubectl | -p                                                | multinode-20210813000359-820289         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:05:59 UTC | Fri, 13 Aug 2021 00:06:03 UTC |
	|         | multinode-20210813000359-820289                   |                                         |          |         |                               |                               |
	|         | -- rollout status                                 |                                         |          |         |                               |                               |
	|         | deployment/busybox                                |                                         |          |         |                               |                               |
	| kubectl | -p multinode-20210813000359-820289                | multinode-20210813000359-820289         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:06:03 UTC | Fri, 13 Aug 2021 00:06:03 UTC |
	|         | -- get pods -o                                    |                                         |          |         |                               |                               |
	|         | jsonpath='{.items[*].status.podIP}'               |                                         |          |         |                               |                               |
	| kubectl | -p multinode-20210813000359-820289                | multinode-20210813000359-820289         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:06:03 UTC | Fri, 13 Aug 2021 00:06:03 UTC |
	|         | -- get pods -o                                    |                                         |          |         |                               |                               |
	|         | jsonpath='{.items[*].metadata.name}'              |                                         |          |         |                               |                               |
	| kubectl | -p                                                | multinode-20210813000359-820289         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:06:03 UTC | Fri, 13 Aug 2021 00:06:03 UTC |
	|         | multinode-20210813000359-820289                   |                                         |          |         |                               |                               |
	|         | -- exec                                           |                                         |          |         |                               |                               |
	|         | busybox-84b6686758-gpb9d --                       |                                         |          |         |                               |                               |
	|         | nslookup kubernetes.io                            |                                         |          |         |                               |                               |
	| kubectl | -p                                                | multinode-20210813000359-820289         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:06:09 UTC | Fri, 13 Aug 2021 00:06:09 UTC |
	|         | multinode-20210813000359-820289                   |                                         |          |         |                               |                               |
	|         | -- exec                                           |                                         |          |         |                               |                               |
	|         | busybox-84b6686758-gpb9d --                       |                                         |          |         |                               |                               |
	|         | nslookup kubernetes.default                       |                                         |          |         |                               |                               |
	| kubectl | -p multinode-20210813000359-820289                | multinode-20210813000359-820289         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:06:19 UTC | Fri, 13 Aug 2021 00:06:19 UTC |
	|         | -- exec busybox-84b6686758-gpb9d                  |                                         |          |         |                               |                               |
	|         | -- nslookup                                       |                                         |          |         |                               |                               |
	|         | kubernetes.default.svc.cluster.local              |                                         |          |         |                               |                               |
	|---------|---------------------------------------------------|-----------------------------------------|----------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 00:03:59
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 00:03:59.496081  826514 out.go:298] Setting OutFile to fd 1 ...
	I0813 00:03:59.496175  826514 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 00:03:59.496185  826514 out.go:311] Setting ErrFile to fd 2...
	I0813 00:03:59.496188  826514 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 00:03:59.496301  826514 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	I0813 00:03:59.496588  826514 out.go:305] Setting JSON to false
	I0813 00:03:59.532184  826514 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":13602,"bootTime":1628799437,"procs":156,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 00:03:59.532432  826514 start.go:121] virtualization: kvm guest
	I0813 00:03:59.535746  826514 out.go:177] * [multinode-20210813000359-820289] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 00:03:59.537206  826514 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0813 00:03:59.535880  826514 notify.go:169] Checking for updates...
	I0813 00:03:59.538704  826514 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 00:03:59.540116  826514 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	I0813 00:03:59.541521  826514 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 00:03:59.541725  826514 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 00:03:59.570289  826514 out.go:177] * Using the kvm2 driver based on user configuration
	I0813 00:03:59.570316  826514 start.go:278] selected driver: kvm2
	I0813 00:03:59.570322  826514 start.go:751] validating driver "kvm2" against <nil>
	I0813 00:03:59.570343  826514 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0813 00:03:59.571390  826514 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:03:59.571592  826514 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0813 00:03:59.581901  826514 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0813 00:03:59.581952  826514 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0813 00:03:59.582101  826514 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 00:03:59.582125  826514 cni.go:93] Creating CNI manager for ""
	I0813 00:03:59.582132  826514 cni.go:154] 0 nodes found, recommending kindnet
	I0813 00:03:59.582137  826514 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0813 00:03:59.582146  826514 start_flags.go:277] config:
	{Name:multinode-20210813000359-820289 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-20210813000359-820289 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0}
	I0813 00:03:59.582286  826514 iso.go:123] acquiring lock: {Name:mk52748db467e5aa4b344902ee09c9ea40467a67 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:03:59.584151  826514 out.go:177] * Starting control plane node multinode-20210813000359-820289 in cluster multinode-20210813000359-820289
	I0813 00:03:59.584171  826514 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 00:03:59.584203  826514 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0813 00:03:59.584230  826514 cache.go:56] Caching tarball of preloaded images
	I0813 00:03:59.584342  826514 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0813 00:03:59.584363  826514 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on crio
	I0813 00:03:59.584683  826514 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/config.json ...
	I0813 00:03:59.584713  826514 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/config.json: {Name:mkec2eec7a60e18f2663b8e1f9d5d73c466c9366 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:03:59.584872  826514 cache.go:205] Successfully downloaded all kic artifacts
	I0813 00:03:59.584900  826514 start.go:313] acquiring machines lock for multinode-20210813000359-820289: {Name:mk2d46e46728943fc604570595bb7616469b4e8e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 00:03:59.584973  826514 start.go:317] acquired machines lock for "multinode-20210813000359-820289" in 33.085µs
	I0813 00:03:59.584999  826514 start.go:89] Provisioning new machine with config: &{Name:multinode-20210813000359-820289 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12122/minikube-v1.22.0-1628238775-12122.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-20210813000359-820289 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 00:03:59.585077  826514 start.go:126] createHost starting for "" (driver="kvm2")
	I0813 00:03:59.587049  826514 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0813 00:03:59.587539  826514 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 00:03:59.587579  826514 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 00:03:59.598019  826514 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41123
	I0813 00:03:59.598493  826514 main.go:130] libmachine: () Calling .GetVersion
	I0813 00:03:59.598999  826514 main.go:130] libmachine: Using API Version  1
	I0813 00:03:59.599019  826514 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 00:03:59.599410  826514 main.go:130] libmachine: () Calling .GetMachineName
	I0813 00:03:59.599599  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetMachineName
	I0813 00:03:59.599774  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .DriverName
	I0813 00:03:59.599934  826514 start.go:160] libmachine.API.Create for "multinode-20210813000359-820289" (driver="kvm2")
	I0813 00:03:59.599963  826514 client.go:168] LocalClient.Create starting
	I0813 00:03:59.599995  826514 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem
	I0813 00:03:59.600055  826514 main.go:130] libmachine: Decoding PEM data...
	I0813 00:03:59.600072  826514 main.go:130] libmachine: Parsing certificate...
	I0813 00:03:59.600159  826514 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem
	I0813 00:03:59.600176  826514 main.go:130] libmachine: Decoding PEM data...
	I0813 00:03:59.600186  826514 main.go:130] libmachine: Parsing certificate...
	I0813 00:03:59.600225  826514 main.go:130] libmachine: Running pre-create checks...
	I0813 00:03:59.600234  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .PreCreateCheck
	I0813 00:03:59.600568  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetConfigRaw
	I0813 00:03:59.600985  826514 main.go:130] libmachine: Creating machine...
	I0813 00:03:59.601001  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .Create
	I0813 00:03:59.601148  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Creating KVM machine...
	I0813 00:03:59.603559  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | found existing default KVM network
	I0813 00:03:59.604569  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | I0813 00:03:59.604432  826537 network.go:288] reserving subnet 192.168.39.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.39.0:0xc0000105e0] misses:0}
	I0813 00:03:59.604614  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | I0813 00:03:59.604526  826537 network.go:235] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0813 00:03:59.626089  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | trying to create private KVM network mk-multinode-20210813000359-820289 192.168.39.0/24...
	I0813 00:03:59.848176  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | private KVM network mk-multinode-20210813000359-820289 192.168.39.0/24 created
	I0813 00:03:59.848215  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | I0813 00:03:59.848119  826537 common.go:108] Making disk image using store path: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	I0813 00:03:59.848236  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Setting up store path in /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289 ...
	I0813 00:03:59.848286  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Building disk image from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/iso/minikube-v1.22.0-1628238775-12122.iso
	I0813 00:03:59.848312  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Downloading /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/iso/minikube-v1.22.0-1628238775-12122.iso...
	I0813 00:04:00.043272  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | I0813 00:04:00.043093  826537 common.go:115] Creating ssh key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289/id_rsa...
	I0813 00:04:00.279344  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | I0813 00:04:00.279234  826537 common.go:121] Creating raw disk image: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289/multinode-20210813000359-820289.rawdisk...
	I0813 00:04:00.279381  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | Writing magic tar header
	I0813 00:04:00.279401  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | Writing SSH key tar header
	I0813 00:04:00.279417  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | I0813 00:04:00.279353  826537 common.go:135] Fixing permissions on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289 ...
	I0813 00:04:00.279532  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289
	I0813 00:04:00.279565  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289 (perms=drwx------)
	I0813 00:04:00.279580  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines
	I0813 00:04:00.279598  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	I0813 00:04:00.279611  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b
	I0813 00:04:00.279625  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0813 00:04:00.279662  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines (perms=drwxr-xr-x)
	I0813 00:04:00.279678  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | Checking permissions on dir: /home/jenkins
	I0813 00:04:00.279696  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | Checking permissions on dir: /home
	I0813 00:04:00.279740  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | Skipping /home - not owner
	I0813 00:04:00.279760  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube (perms=drwxr-xr-x)
	I0813 00:04:00.279794  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b (perms=drwxr-xr-x)
	I0813 00:04:00.279812  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxr-xr-x)
	I0813 00:04:00.279829  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0813 00:04:00.279846  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Creating domain...
	I0813 00:04:00.305341  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:96:6f:d8 in network default
	I0813 00:04:00.305868  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:00.305887  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Ensuring networks are active...
	I0813 00:04:00.307990  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Ensuring network default is active
	I0813 00:04:00.308362  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Ensuring network mk-multinode-20210813000359-820289 is active
	I0813 00:04:00.308912  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Getting domain xml...
	I0813 00:04:00.310651  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Creating domain...
	I0813 00:04:00.667613  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Waiting to get IP...
	I0813 00:04:00.668353  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:00.668812  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | unable to find current IP address of domain multinode-20210813000359-820289 in network mk-multinode-20210813000359-820289
	I0813 00:04:00.668884  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | I0813 00:04:00.668781  826537 retry.go:31] will retry after 263.082536ms: waiting for machine to come up
	I0813 00:04:00.933252  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:00.933785  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | unable to find current IP address of domain multinode-20210813000359-820289 in network mk-multinode-20210813000359-820289
	I0813 00:04:00.933817  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | I0813 00:04:00.933734  826537 retry.go:31] will retry after 381.329545ms: waiting for machine to come up
	I0813 00:04:01.316181  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:01.316693  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | unable to find current IP address of domain multinode-20210813000359-820289 in network mk-multinode-20210813000359-820289
	I0813 00:04:01.316718  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | I0813 00:04:01.316647  826537 retry.go:31] will retry after 422.765636ms: waiting for machine to come up
	I0813 00:04:01.741434  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:01.741860  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | unable to find current IP address of domain multinode-20210813000359-820289 in network mk-multinode-20210813000359-820289
	I0813 00:04:01.741887  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | I0813 00:04:01.741836  826537 retry.go:31] will retry after 473.074753ms: waiting for machine to come up
	I0813 00:04:02.216381  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:02.216916  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | unable to find current IP address of domain multinode-20210813000359-820289 in network mk-multinode-20210813000359-820289
	I0813 00:04:02.216956  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | I0813 00:04:02.216864  826537 retry.go:31] will retry after 587.352751ms: waiting for machine to come up
	I0813 00:04:02.805656  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:02.806132  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | unable to find current IP address of domain multinode-20210813000359-820289 in network mk-multinode-20210813000359-820289
	I0813 00:04:02.806160  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | I0813 00:04:02.806086  826537 retry.go:31] will retry after 834.206799ms: waiting for machine to come up
	I0813 00:04:03.642024  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:03.642483  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | unable to find current IP address of domain multinode-20210813000359-820289 in network mk-multinode-20210813000359-820289
	I0813 00:04:03.642508  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | I0813 00:04:03.642436  826537 retry.go:31] will retry after 746.553905ms: waiting for machine to come up
	I0813 00:04:04.390259  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:04.390717  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | unable to find current IP address of domain multinode-20210813000359-820289 in network mk-multinode-20210813000359-820289
	I0813 00:04:04.390743  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | I0813 00:04:04.390667  826537 retry.go:31] will retry after 987.362415ms: waiting for machine to come up
	I0813 00:04:05.379227  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:05.379784  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | unable to find current IP address of domain multinode-20210813000359-820289 in network mk-multinode-20210813000359-820289
	I0813 00:04:05.379812  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | I0813 00:04:05.379696  826537 retry.go:31] will retry after 1.189835008s: waiting for machine to come up
	I0813 00:04:06.570567  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:06.570986  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | unable to find current IP address of domain multinode-20210813000359-820289 in network mk-multinode-20210813000359-820289
	I0813 00:04:06.571022  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | I0813 00:04:06.570935  826537 retry.go:31] will retry after 1.677229867s: waiting for machine to come up
	I0813 00:04:08.250638  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:08.251111  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | unable to find current IP address of domain multinode-20210813000359-820289 in network mk-multinode-20210813000359-820289
	I0813 00:04:08.251136  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | I0813 00:04:08.251062  826537 retry.go:31] will retry after 2.346016261s: waiting for machine to come up
	I0813 00:04:10.598966  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:10.599428  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | unable to find current IP address of domain multinode-20210813000359-820289 in network mk-multinode-20210813000359-820289
	I0813 00:04:10.599459  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | I0813 00:04:10.599387  826537 retry.go:31] will retry after 3.36678925s: waiting for machine to come up
	I0813 00:04:13.967189  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:13.967680  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has current primary IP address 192.168.39.22 and MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:13.967732  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Found IP for machine: 192.168.39.22
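The dozen "will retry after ..." lines above poll libvirt's DHCP leases with a jittered, growing delay (263ms up to 3.37s) until the machine reports an IP. A minimal Go sketch of that cadence, assuming a placeholder hasIP lookup rather than minikube's actual retry.go helper:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// hasIP is a placeholder for the DHCP-lease lookup in the log above.
	func hasIP() bool { return false }

	func waitForIP(maxAttempts int) error {
		delay := 250 * time.Millisecond
		for i := 0; i < maxAttempts; i++ {
			if hasIP() {
				return nil
			}
			// Randomize the wait, then grow it; this approximates the
			// irregular, increasing delays printed by retry.go above.
			jitter := time.Duration(rand.Int63n(int64(delay)))
			time.Sleep(delay + jitter)
			delay = delay * 3 / 2
		}
		return fmt.Errorf("machine did not get an IP after %d attempts", maxAttempts)
	}

	func main() { fmt.Println(waitForIP(13)) }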
	I0813 00:04:13.967752  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Reserving static IP address...
	I0813 00:04:13.968111  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | unable to find host DHCP lease matching {name: "multinode-20210813000359-820289", mac: "52:54:00:b5:e4:55", ip: "192.168.39.22"} in network mk-multinode-20210813000359-820289
	I0813 00:04:14.016184  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | Getting to WaitForSSH function...
	I0813 00:04:14.016216  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Reserved static IP address: 192.168.39.22
	I0813 00:04:14.016232  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Waiting for SSH to be available...
	I0813 00:04:14.021092  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:14.021436  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e4:55", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:04:13 +0000 UTC Type:0 Mac:52:54:00:b5:e4:55 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b5:e4:55}
	I0813 00:04:14.021461  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined IP address 192.168.39.22 and MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:14.021579  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | Using SSH client type: external
	I0813 00:04:14.021611  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289/id_rsa (-rw-------)
	I0813 00:04:14.021659  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.22 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 00:04:14.021677  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | About to run SSH command:
	I0813 00:04:14.021706  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | exit 0
	I0813 00:04:14.151163  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | SSH cmd err, output: <nil>: 
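The external SSH probe above simply runs `exit 0` with the flags shown until the command returns cleanly. A sketch of that loop in Go using os/exec, with the key path shortened and only a subset of the logged ssh options; this is an illustration, not minikube's actual WaitForSSH code:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForSSH re-runs a trivial `exit 0` over ssh until it succeeds,
	// which is what the "About to run SSH command: exit 0" lines are doing.
	func waitForSSH(user, ip, keyPath string) error {
		for attempt := 0; attempt < 60; attempt++ {
			cmd := exec.Command("ssh",
				"-o", "StrictHostKeyChecking=no",
				"-o", "UserKnownHostsFile=/dev/null",
				"-o", "ConnectTimeout=10",
				"-i", keyPath,
				fmt.Sprintf("%s@%s", user, ip),
				"exit 0")
			if err := cmd.Run(); err == nil {
				return nil
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("ssh to %s@%s never became available", user, ip)
	}

	func main() {
		fmt.Println(waitForSSH("docker", "192.168.39.22", "/path/to/id_rsa"))
	}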
	I0813 00:04:14.151557  826514 main.go:130] libmachine: (multinode-20210813000359-820289) KVM machine creation complete!
	I0813 00:04:14.151637  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetConfigRaw
	I0813 00:04:14.152186  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .DriverName
	I0813 00:04:14.152397  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .DriverName
	I0813 00:04:14.152630  826514 main.go:130] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0813 00:04:14.152647  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetState
	I0813 00:04:14.155202  826514 main.go:130] libmachine: Detecting operating system of created instance...
	I0813 00:04:14.155218  826514 main.go:130] libmachine: Waiting for SSH to be available...
	I0813 00:04:14.155225  826514 main.go:130] libmachine: Getting to WaitForSSH function...
	I0813 00:04:14.155231  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHHostname
	I0813 00:04:14.159768  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:14.160079  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e4:55", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:04:13 +0000 UTC Type:0 Mac:52:54:00:b5:e4:55 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:multinode-20210813000359-820289 Clientid:01:52:54:00:b5:e4:55}
	I0813 00:04:14.160112  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined IP address 192.168.39.22 and MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:14.160215  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHPort
	I0813 00:04:14.160394  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHKeyPath
	I0813 00:04:14.160525  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHKeyPath
	I0813 00:04:14.160635  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHUsername
	I0813 00:04:14.160826  826514 main.go:130] libmachine: Using SSH client type: native
	I0813 00:04:14.161035  826514 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0813 00:04:14.161052  826514 main.go:130] libmachine: About to run SSH command:
	exit 0
	I0813 00:04:14.270761  826514 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 00:04:14.270785  826514 main.go:130] libmachine: Detecting the provisioner...
	I0813 00:04:14.270793  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHHostname
	I0813 00:04:14.276005  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:14.276321  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e4:55", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:04:13 +0000 UTC Type:0 Mac:52:54:00:b5:e4:55 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:multinode-20210813000359-820289 Clientid:01:52:54:00:b5:e4:55}
	I0813 00:04:14.276357  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined IP address 192.168.39.22 and MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:14.276551  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHPort
	I0813 00:04:14.276749  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHKeyPath
	I0813 00:04:14.276918  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHKeyPath
	I0813 00:04:14.277089  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHUsername
	I0813 00:04:14.277258  826514 main.go:130] libmachine: Using SSH client type: native
	I0813 00:04:14.277400  826514 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0813 00:04:14.277411  826514 main.go:130] libmachine: About to run SSH command:
	cat /etc/os-release
	I0813 00:04:14.388249  826514 main.go:130] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2020.02.12
	ID=buildroot
	VERSION_ID=2020.02.12
	PRETTY_NAME="Buildroot 2020.02.12"
	
	I0813 00:04:14.388309  826514 main.go:130] libmachine: found compatible host: buildroot
	I0813 00:04:14.388319  826514 main.go:130] libmachine: Provisioning with buildroot...
	I0813 00:04:14.388328  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetMachineName
	I0813 00:04:14.388579  826514 buildroot.go:166] provisioning hostname "multinode-20210813000359-820289"
	I0813 00:04:14.388608  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetMachineName
	I0813 00:04:14.388769  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHHostname
	I0813 00:04:14.393460  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:14.393774  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e4:55", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:04:13 +0000 UTC Type:0 Mac:52:54:00:b5:e4:55 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:multinode-20210813000359-820289 Clientid:01:52:54:00:b5:e4:55}
	I0813 00:04:14.393807  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined IP address 192.168.39.22 and MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:14.393876  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHPort
	I0813 00:04:14.394042  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHKeyPath
	I0813 00:04:14.394165  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHKeyPath
	I0813 00:04:14.394266  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHUsername
	I0813 00:04:14.394436  826514 main.go:130] libmachine: Using SSH client type: native
	I0813 00:04:14.394581  826514 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0813 00:04:14.394600  826514 main.go:130] libmachine: About to run SSH command:
	sudo hostname multinode-20210813000359-820289 && echo "multinode-20210813000359-820289" | sudo tee /etc/hostname
	I0813 00:04:14.513606  826514 main.go:130] libmachine: SSH cmd err, output: <nil>: multinode-20210813000359-820289
	
	I0813 00:04:14.513629  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHHostname
	I0813 00:04:14.518489  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:14.518820  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e4:55", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:04:13 +0000 UTC Type:0 Mac:52:54:00:b5:e4:55 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:multinode-20210813000359-820289 Clientid:01:52:54:00:b5:e4:55}
	I0813 00:04:14.518844  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined IP address 192.168.39.22 and MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:14.518980  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHPort
	I0813 00:04:14.519153  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHKeyPath
	I0813 00:04:14.519314  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHKeyPath
	I0813 00:04:14.519454  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHUsername
	I0813 00:04:14.519614  826514 main.go:130] libmachine: Using SSH client type: native
	I0813 00:04:14.519795  826514 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0813 00:04:14.519819  826514 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20210813000359-820289' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20210813000359-820289/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20210813000359-820289' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 00:04:14.637521  826514 main.go:130] libmachine: SSH cmd err, output: <nil>: 
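The hosts script above is idempotent: it leaves /etc/hosts alone if any line already ends in the hostname, rewrites an existing Debian-style 127.0.1.1 alias if one is present, and otherwise appends one. After a clean run, /etc/hosts therefore contains:

	127.0.1.1 multinode-20210813000359-820289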
	I0813 00:04:14.637545  826514 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube}
	I0813 00:04:14.637591  826514 buildroot.go:174] setting up certificates
	I0813 00:04:14.637602  826514 provision.go:83] configureAuth start
	I0813 00:04:14.637624  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetMachineName
	I0813 00:04:14.637810  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetIP
	I0813 00:04:14.642593  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:14.642897  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e4:55", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:04:13 +0000 UTC Type:0 Mac:52:54:00:b5:e4:55 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:multinode-20210813000359-820289 Clientid:01:52:54:00:b5:e4:55}
	I0813 00:04:14.642919  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined IP address 192.168.39.22 and MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:14.643011  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHHostname
	I0813 00:04:14.647090  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:14.647337  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e4:55", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:04:13 +0000 UTC Type:0 Mac:52:54:00:b5:e4:55 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:multinode-20210813000359-820289 Clientid:01:52:54:00:b5:e4:55}
	I0813 00:04:14.647370  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined IP address 192.168.39.22 and MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:14.647425  826514 provision.go:137] copyHostCerts
	I0813 00:04:14.647454  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem
	I0813 00:04:14.647492  826514 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem, removing ...
	I0813 00:04:14.647502  826514 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem
	I0813 00:04:14.647555  826514 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem (1679 bytes)
	I0813 00:04:14.647614  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem
	I0813 00:04:14.647636  826514 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem, removing ...
	I0813 00:04:14.647641  826514 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem
	I0813 00:04:14.647661  826514 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem (1078 bytes)
	I0813 00:04:14.647762  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem
	I0813 00:04:14.647787  826514 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem, removing ...
	I0813 00:04:14.647794  826514 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem
	I0813 00:04:14.647818  826514 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem (1123 bytes)
	I0813 00:04:14.647864  826514 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem org=jenkins.multinode-20210813000359-820289 san=[192.168.39.22 192.168.39.22 localhost 127.0.0.1 minikube multinode-20210813000359-820289]
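The provision.go line above generates a server certificate whose SANs cover the node IP, loopback, and the hostname aliases. A compact Go sketch of building a certificate with that SAN set; it self-signs for brevity, whereas the real step signs with the ca.pem / ca-key.pem pair named in the log:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		// Template carrying the same org and SANs as the log line above.
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-20210813000359-820289"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(1, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("192.168.39.22"), net.ParseIP("127.0.0.1")},
			DNSNames:     []string{"localhost", "minikube", "multinode-20210813000359-820289"},
		}
		// Self-signed here; minikube signs with its CA instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}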
	I0813 00:04:14.939227  826514 provision.go:171] copyRemoteCerts
	I0813 00:04:14.939287  826514 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 00:04:14.939317  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHHostname
	I0813 00:04:14.944061  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:14.944333  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e4:55", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:04:13 +0000 UTC Type:0 Mac:52:54:00:b5:e4:55 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:multinode-20210813000359-820289 Clientid:01:52:54:00:b5:e4:55}
	I0813 00:04:14.944368  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined IP address 192.168.39.22 and MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:14.944478  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHPort
	I0813 00:04:14.944674  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHKeyPath
	I0813 00:04:14.944821  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHUsername
	I0813 00:04:14.944950  826514 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289/id_rsa Username:docker}
	I0813 00:04:15.026924  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0813 00:04:15.026968  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 00:04:15.042656  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0813 00:04:15.042713  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0813 00:04:15.058311  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0813 00:04:15.058359  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 00:04:15.074148  826514 provision.go:86] duration metric: configureAuth took 436.531554ms
	I0813 00:04:15.074173  826514 buildroot.go:189] setting minikube options for container-runtime
	I0813 00:04:15.074488  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHHostname
	I0813 00:04:15.080330  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:15.080752  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e4:55", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:04:13 +0000 UTC Type:0 Mac:52:54:00:b5:e4:55 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:multinode-20210813000359-820289 Clientid:01:52:54:00:b5:e4:55}
	I0813 00:04:15.080796  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined IP address 192.168.39.22 and MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:15.080943  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHPort
	I0813 00:04:15.081128  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHKeyPath
	I0813 00:04:15.081239  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHKeyPath
	I0813 00:04:15.081375  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHUsername
	I0813 00:04:15.081520  826514 main.go:130] libmachine: Using SSH client type: native
	I0813 00:04:15.081653  826514 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0813 00:04:15.081668  826514 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 00:04:15.760734  826514 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
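The %!s(MISSING) token in the printf command above (and in the later stat invocation) is Go's fmt marker for a format verb logged without its argument; the command that actually ran on the guest received the real string, as the echoed CRIO_MINIKUBE_OPTIONS line confirms. Likewise, the "Checking connection to Docker" and "Docker is up and running!" messages below are libmachine's generic provisioning text and appear even though this cluster runs CRI-O.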
	
	I0813 00:04:15.760791  826514 main.go:130] libmachine: Checking connection to Docker...
	I0813 00:04:15.760802  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetURL
	I0813 00:04:15.763347  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | Using libvirt version 3000000
	I0813 00:04:15.767747  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:15.768070  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e4:55", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:04:13 +0000 UTC Type:0 Mac:52:54:00:b5:e4:55 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:multinode-20210813000359-820289 Clientid:01:52:54:00:b5:e4:55}
	I0813 00:04:15.768106  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined IP address 192.168.39.22 and MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:15.768262  826514 main.go:130] libmachine: Docker is up and running!
	I0813 00:04:15.768284  826514 main.go:130] libmachine: Reticulating splines...
	I0813 00:04:15.768292  826514 client.go:171] LocalClient.Create took 16.168318436s
	I0813 00:04:15.768310  826514 start.go:168] duration metric: libmachine.API.Create for "multinode-20210813000359-820289" took 16.168377925s
	I0813 00:04:15.768321  826514 start.go:267] post-start starting for "multinode-20210813000359-820289" (driver="kvm2")
	I0813 00:04:15.768327  826514 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 00:04:15.768345  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .DriverName
	I0813 00:04:15.768551  826514 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 00:04:15.768582  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHHostname
	I0813 00:04:15.772905  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:15.773227  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e4:55", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:04:13 +0000 UTC Type:0 Mac:52:54:00:b5:e4:55 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:multinode-20210813000359-820289 Clientid:01:52:54:00:b5:e4:55}
	I0813 00:04:15.773258  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined IP address 192.168.39.22 and MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:15.773366  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHPort
	I0813 00:04:15.773535  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHKeyPath
	I0813 00:04:15.773688  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHUsername
	I0813 00:04:15.773815  826514 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289/id_rsa Username:docker}
	I0813 00:04:15.854725  826514 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 00:04:15.859087  826514 command_runner.go:124] > NAME=Buildroot
	I0813 00:04:15.859105  826514 command_runner.go:124] > VERSION=2020.02.12
	I0813 00:04:15.859111  826514 command_runner.go:124] > ID=buildroot
	I0813 00:04:15.859118  826514 command_runner.go:124] > VERSION_ID=2020.02.12
	I0813 00:04:15.859125  826514 command_runner.go:124] > PRETTY_NAME="Buildroot 2020.02.12"
	I0813 00:04:15.859470  826514 info.go:137] Remote host: Buildroot 2020.02.12
	I0813 00:04:15.859490  826514 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/addons for local assets ...
	I0813 00:04:15.859539  826514 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files for local assets ...
	I0813 00:04:15.859659  826514 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/8202892.pem -> 8202892.pem in /etc/ssl/certs
	I0813 00:04:15.859671  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/8202892.pem -> /etc/ssl/certs/8202892.pem
	I0813 00:04:15.859795  826514 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 00:04:15.866931  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/8202892.pem --> /etc/ssl/certs/8202892.pem (1708 bytes)
	I0813 00:04:15.882819  826514 start.go:270] post-start completed in 114.487613ms
	I0813 00:04:15.882861  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetConfigRaw
	I0813 00:04:15.883436  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetIP
	I0813 00:04:15.888100  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:15.888415  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e4:55", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:04:13 +0000 UTC Type:0 Mac:52:54:00:b5:e4:55 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:multinode-20210813000359-820289 Clientid:01:52:54:00:b5:e4:55}
	I0813 00:04:15.888445  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined IP address 192.168.39.22 and MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:15.888637  826514 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/config.json ...
	I0813 00:04:15.888801  826514 start.go:129] duration metric: createHost completed in 16.303716534s
	I0813 00:04:15.888815  826514 start.go:80] releasing machines lock for "multinode-20210813000359-820289", held for 16.303832114s
	I0813 00:04:15.888846  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .DriverName
	I0813 00:04:15.889026  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetIP
	I0813 00:04:15.893253  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:15.893508  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e4:55", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:04:13 +0000 UTC Type:0 Mac:52:54:00:b5:e4:55 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:multinode-20210813000359-820289 Clientid:01:52:54:00:b5:e4:55}
	I0813 00:04:15.893543  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined IP address 192.168.39.22 and MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:15.893685  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .DriverName
	I0813 00:04:15.893866  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .DriverName
	I0813 00:04:15.894287  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .DriverName
	I0813 00:04:15.894476  826514 ssh_runner.go:149] Run: systemctl --version
	I0813 00:04:15.894502  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHHostname
	I0813 00:04:15.894536  826514 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 00:04:15.894579  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHHostname
	I0813 00:04:15.898888  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:15.899229  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e4:55", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:04:13 +0000 UTC Type:0 Mac:52:54:00:b5:e4:55 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:multinode-20210813000359-820289 Clientid:01:52:54:00:b5:e4:55}
	I0813 00:04:15.899260  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined IP address 192.168.39.22 and MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:15.899355  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHPort
	I0813 00:04:15.899505  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHKeyPath
	I0813 00:04:15.899638  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHUsername
	I0813 00:04:15.899773  826514 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289/id_rsa Username:docker}
	I0813 00:04:15.900004  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:15.900298  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e4:55", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:04:13 +0000 UTC Type:0 Mac:52:54:00:b5:e4:55 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:multinode-20210813000359-820289 Clientid:01:52:54:00:b5:e4:55}
	I0813 00:04:15.900321  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined IP address 192.168.39.22 and MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:15.900502  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHPort
	I0813 00:04:15.900688  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHKeyPath
	I0813 00:04:15.900837  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHUsername
	I0813 00:04:15.900965  826514 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289/id_rsa Username:docker}
	I0813 00:04:15.981023  826514 command_runner.go:124] > systemd 244 (244)
	I0813 00:04:15.981064  826514 command_runner.go:124] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK +SYSVINIT +UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0813 00:04:15.981095  826514 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 00:04:15.981185  826514 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 00:04:16.007266  826514 command_runner.go:124] > <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
	I0813 00:04:16.007294  826514 command_runner.go:124] > <TITLE>302 Moved</TITLE></HEAD><BODY>
	I0813 00:04:16.007304  826514 command_runner.go:124] > <H1>302 Moved</H1>
	I0813 00:04:16.007312  826514 command_runner.go:124] > The document has moved
	I0813 00:04:16.007323  826514 command_runner.go:124] > <A HREF="https://cloud.google.com/container-registry/">here</A>.
	I0813 00:04:16.007336  826514 command_runner.go:124] > </BODY></HTML>
	I0813 00:04:16.007461  826514 command_runner.go:124] ! time="2021-08-13T00:04:15Z" level=warning msg="image connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	I0813 00:04:17.989317  826514 command_runner.go:124] ! time="2021-08-13T00:04:17Z" level=error msg="connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	I0813 00:04:19.976052  826514 command_runner.go:124] ! time="2021-08-13T00:04:19Z" level=error msg="connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	I0813 00:04:19.980765  826514 command_runner.go:124] > {
	I0813 00:04:19.980782  826514 command_runner.go:124] >   "images": [
	I0813 00:04:19.980787  826514 command_runner.go:124] >   ]
	I0813 00:04:19.980791  826514 command_runner.go:124] > }
	I0813 00:04:19.980810  826514 ssh_runner.go:189] Completed: sudo crictl images --output json: (3.999609413s)
	I0813 00:04:19.980902  826514 crio.go:420] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.21.3". assuming images are not preloaded.
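With crictl reporting an empty image list, the images come from the preload path that follows: locate lz4 on the guest, confirm /preloaded.tar.lz4 is absent, copy the ~576 MB tarball over SSH, and unpack it into /var.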
	I0813 00:04:19.980950  826514 ssh_runner.go:149] Run: which lz4
	I0813 00:04:19.984746  826514 command_runner.go:124] > /bin/lz4
	I0813 00:04:19.984881  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0813 00:04:19.984969  826514 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0813 00:04:19.989446  826514 command_runner.go:124] ! stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0813 00:04:19.990091  826514 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0813 00:04:19.990122  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (576184326 bytes)
	I0813 00:04:22.875606  826514 crio.go:362] Took 2.890671 seconds to copy over tarball
	I0813 00:04:22.875677  826514 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0813 00:04:27.289558  826514 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (4.413845256s)
	I0813 00:04:27.289590  826514 crio.go:369] Took 4.413952 seconds to extract the tarball
	I0813 00:04:27.289604  826514 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0813 00:04:27.328003  826514 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 00:04:27.340687  826514 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 00:04:27.350625  826514 docker.go:153] disabling docker service ...
	I0813 00:04:27.350666  826514 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 00:04:27.361626  826514 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 00:04:27.372036  826514 command_runner.go:124] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0813 00:04:27.372382  826514 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 00:04:27.495882  826514 command_runner.go:124] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0813 00:04:27.495941  826514 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 00:04:27.630479  826514 command_runner.go:124] ! Unit docker.service does not exist, proceeding anyway.
	I0813 00:04:27.630512  826514 command_runner.go:124] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
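Masking works by symlinking the unit to /dev/null, as the systemctl output above confirms, so docker cannot be started even as a dependency. Should docker ever need to come back by hand on such a VM (not something this run does, just the inverse for reference):

	sudo systemctl unmask docker.service
	sudo systemctl enable --now docker.socket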
	I0813 00:04:27.630566  826514 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 00:04:27.640371  826514 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 00:04:27.652853  826514 command_runner.go:124] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0813 00:04:27.652867  826514 command_runner.go:124] > image-endpoint: unix:///var/run/crio/crio.sock
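Pinning both endpoints in /etc/crictl.yaml is what silences the "image connect using default endpoints" deprecation warning logged earlier: crictl now talks straight to the CRI-O socket instead of probing dockershim and containerd first. A quick manual check of the wiring (an illustrative command, not part of this run):

	sudo crictl --config /etc/crictl.yaml info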
	I0813 00:04:27.653303  826514 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0813 00:04:27.660562  826514 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0813 00:04:27.660581  826514 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
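Neither sed run echoes the file back, but from the expressions themselves the two edited lines of /etc/crio/crio.conf should now read (a reconstruction, not captured in this log):

	pause_image = "k8s.gcr.io/pause:3.4.1"
	cni_default_network = "kindnet"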
	I0813 00:04:27.668141  826514 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 00:04:27.674532  826514 command_runner.go:124] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 00:04:27.674885  826514 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 00:04:27.674939  826514 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 00:04:27.689306  826514 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
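The modprobe and the echo into /proc take effect immediately but only for the current boot, which is all a throwaway test VM needs. On a persistent host the same settings would normally live in config files instead (a standard sysctl/modules-load sketch, not something minikube does here):

	echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf
	printf 'net.ipv4.ip_forward = 1\nnet.bridge.bridge-nf-call-iptables = 1\n' \
	  | sudo tee /etc/sysctl.d/k8s.conf
	sudo sysctl --system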
	I0813 00:04:27.695945  826514 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 00:04:27.822191  826514 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 00:04:27.959982  826514 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 00:04:27.960051  826514 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 00:04:27.964993  826514 command_runner.go:124] >   File: /var/run/crio/crio.sock
	I0813 00:04:27.965015  826514 command_runner.go:124] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0813 00:04:27.965022  826514 command_runner.go:124] > Device: 14h/20d	Inode: 29936       Links: 1
	I0813 00:04:27.965029  826514 command_runner.go:124] > Access: (0755/srwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0813 00:04:27.965034  826514 command_runner.go:124] > Access: 2021-08-13 00:04:19.926943447 +0000
	I0813 00:04:27.965043  826514 command_runner.go:124] > Modify: 2021-08-13 00:04:15.656484954 +0000
	I0813 00:04:27.965051  826514 command_runner.go:124] > Change: 2021-08-13 00:04:15.656484954 +0000
	I0813 00:04:27.965057  826514 command_runner.go:124] >  Birth: -
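The 60-second socket wait amounts to re-running that stat until the socket appears; here it succeeds on the first try. A rough standalone equivalent (a sketch, not minikube's actual retry loop):

	for _ in $(seq 1 60); do
	  stat /var/run/crio/crio.sock >/dev/null 2>&1 && break
	  sleep 1
	done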
	I0813 00:04:27.965289  826514 start.go:417] Will wait 60s for crictl version
	I0813 00:04:27.965344  826514 ssh_runner.go:149] Run: sudo crictl version
	I0813 00:04:27.995939  826514 command_runner.go:124] > Version:  0.1.0
	I0813 00:04:27.995957  826514 command_runner.go:124] > RuntimeName:  cri-o
	I0813 00:04:27.995961  826514 command_runner.go:124] > RuntimeVersion:  1.20.2
	I0813 00:04:27.995967  826514 command_runner.go:124] > RuntimeApiVersion:  v1alpha1
	I0813 00:04:27.996046  826514 start.go:426] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.2
	RuntimeApiVersion:  v1alpha1
	I0813 00:04:27.996116  826514 ssh_runner.go:149] Run: crio --version
	I0813 00:04:28.101692  826514 command_runner.go:124] > crio version 1.20.2
	I0813 00:04:28.101721  826514 command_runner.go:124] > Version:       1.20.2
	I0813 00:04:28.101728  826514 command_runner.go:124] > GitCommit:     d5a999ad0a35d895ded554e1e18c142075501a98
	I0813 00:04:28.101733  826514 command_runner.go:124] > GitTreeState:  clean
	I0813 00:04:28.101740  826514 command_runner.go:124] > BuildDate:     2021-08-06T09:19:16Z
	I0813 00:04:28.101748  826514 command_runner.go:124] > GoVersion:     go1.13.15
	I0813 00:04:28.101754  826514 command_runner.go:124] > Compiler:      gc
	I0813 00:04:28.101761  826514 command_runner.go:124] > Platform:      linux/amd64
	I0813 00:04:28.103227  826514 command_runner.go:124] ! time="2021-08-13T00:04:28Z" level=info msg="Starting CRI-O, version: 1.20.2, git: d5a999ad0a35d895ded554e1e18c142075501a98(clean)"
	I0813 00:04:28.103308  826514 ssh_runner.go:149] Run: crio --version
	I0813 00:04:28.371922  826514 command_runner.go:124] > crio version 1.20.2
	I0813 00:04:28.371946  826514 command_runner.go:124] > Version:       1.20.2
	I0813 00:04:28.371954  826514 command_runner.go:124] > GitCommit:     d5a999ad0a35d895ded554e1e18c142075501a98
	I0813 00:04:28.371958  826514 command_runner.go:124] > GitTreeState:  clean
	I0813 00:04:28.371964  826514 command_runner.go:124] > BuildDate:     2021-08-06T09:19:16Z
	I0813 00:04:28.371968  826514 command_runner.go:124] > GoVersion:     go1.13.15
	I0813 00:04:28.371972  826514 command_runner.go:124] > Compiler:      gc
	I0813 00:04:28.371977  826514 command_runner.go:124] > Platform:      linux/amd64
	I0813 00:04:28.373487  826514 command_runner.go:124] ! time="2021-08-13T00:04:28Z" level=info msg="Starting CRI-O, version: 1.20.2, git: d5a999ad0a35d895ded554e1e18c142075501a98(clean)"
	I0813 00:04:30.488898  826514 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.2 ...
	I0813 00:04:30.489022  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetIP
	I0813 00:04:31.496831  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:31.497127  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e4:55", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:04:13 +0000 UTC Type:0 Mac:52:54:00:b5:e4:55 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:multinode-20210813000359-820289 Clientid:01:52:54:00:b5:e4:55}
	I0813 00:04:31.497164  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined IP address 192.168.39.22 and MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:31.497356  826514 ssh_runner.go:149] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0813 00:04:31.502501  826514 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
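The one-liner strips any stale host.minikube.internal entry before appending the current gateway address, so repeated starts don't accumulate duplicates; afterwards /etc/hosts should contain (reconstructed from the command, not echoed in the log):

	192.168.39.1	host.minikube.internal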
	I0813 00:04:31.513590  826514 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 00:04:31.513650  826514 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 00:04:31.586120  826514 command_runner.go:124] > {
	I0813 00:04:31.586143  826514 command_runner.go:124] >   "images": [
	I0813 00:04:31.586148  826514 command_runner.go:124] >     {
	I0813 00:04:31.586156  826514 command_runner.go:124] >       "id": "6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb",
	I0813 00:04:31.586161  826514 command_runner.go:124] >       "repoTags": [
	I0813 00:04:31.586167  826514 command_runner.go:124] >         "docker.io/kindest/kindnetd:v20210326-1e038dc5"
	I0813 00:04:31.586171  826514 command_runner.go:124] >       ],
	I0813 00:04:31.586176  826514 command_runner.go:124] >       "repoDigests": [
	I0813 00:04:31.586186  826514 command_runner.go:124] >         "docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1",
	I0813 00:04:31.586196  826514 command_runner.go:124] >         "docker.io/kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c"
	I0813 00:04:31.586200  826514 command_runner.go:124] >       ],
	I0813 00:04:31.586205  826514 command_runner.go:124] >       "size": "119984626",
	I0813 00:04:31.586209  826514 command_runner.go:124] >       "uid": null,
	I0813 00:04:31.586213  826514 command_runner.go:124] >       "username": "",
	I0813 00:04:31.586224  826514 command_runner.go:124] >       "spec": null
	I0813 00:04:31.586232  826514 command_runner.go:124] >     },
	I0813 00:04:31.586236  826514 command_runner.go:124] >     {
	I0813 00:04:31.586243  826514 command_runner.go:124] >       "id": "9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db",
	I0813 00:04:31.586248  826514 command_runner.go:124] >       "repoTags": [
	I0813 00:04:31.586254  826514 command_runner.go:124] >         "docker.io/kubernetesui/dashboard:v2.1.0"
	I0813 00:04:31.586260  826514 command_runner.go:124] >       ],
	I0813 00:04:31.586264  826514 command_runner.go:124] >       "repoDigests": [
	I0813 00:04:31.586274  826514 command_runner.go:124] >         "docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f",
	I0813 00:04:31.586284  826514 command_runner.go:124] >         "docker.io/kubernetesui/dashboard@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6"
	I0813 00:04:31.586288  826514 command_runner.go:124] >       ],
	I0813 00:04:31.586293  826514 command_runner.go:124] >       "size": "228528983",
	I0813 00:04:31.586296  826514 command_runner.go:124] >       "uid": null,
	I0813 00:04:31.586301  826514 command_runner.go:124] >       "username": "nonroot",
	I0813 00:04:31.586308  826514 command_runner.go:124] >       "spec": null
	I0813 00:04:31.586312  826514 command_runner.go:124] >     },
	I0813 00:04:31.586316  826514 command_runner.go:124] >     {
	I0813 00:04:31.586322  826514 command_runner.go:124] >       "id": "86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4",
	I0813 00:04:31.586327  826514 command_runner.go:124] >       "repoTags": [
	I0813 00:04:31.586333  826514 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper:v1.0.4"
	I0813 00:04:31.586340  826514 command_runner.go:124] >       ],
	I0813 00:04:31.586345  826514 command_runner.go:124] >       "repoDigests": [
	I0813 00:04:31.586353  826514 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper@sha256:555981a24f184420f3be0c79d4efb6c948a85cfce84034f85a563f4151a81cbf",
	I0813 00:04:31.586364  826514 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper@sha256:d78f995c07124874c2a2e9b404cffa6bc6233668d63d6c6210574971f3d5914b"
	I0813 00:04:31.586367  826514 command_runner.go:124] >       ],
	I0813 00:04:31.586372  826514 command_runner.go:124] >       "size": "36950651",
	I0813 00:04:31.586376  826514 command_runner.go:124] >       "uid": null,
	I0813 00:04:31.586380  826514 command_runner.go:124] >       "username": "",
	I0813 00:04:31.586386  826514 command_runner.go:124] >       "spec": null
	I0813 00:04:31.586389  826514 command_runner.go:124] >     },
	I0813 00:04:31.586393  826514 command_runner.go:124] >     {
	I0813 00:04:31.586399  826514 command_runner.go:124] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0813 00:04:31.586406  826514 command_runner.go:124] >       "repoTags": [
	I0813 00:04:31.586411  826514 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0813 00:04:31.586414  826514 command_runner.go:124] >       ],
	I0813 00:04:31.586418  826514 command_runner.go:124] >       "repoDigests": [
	I0813 00:04:31.586428  826514 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0813 00:04:31.586437  826514 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0813 00:04:31.586442  826514 command_runner.go:124] >       ],
	I0813 00:04:31.586447  826514 command_runner.go:124] >       "size": "31470524",
	I0813 00:04:31.586454  826514 command_runner.go:124] >       "uid": null,
	I0813 00:04:31.586459  826514 command_runner.go:124] >       "username": "",
	I0813 00:04:31.586463  826514 command_runner.go:124] >       "spec": null
	I0813 00:04:31.586466  826514 command_runner.go:124] >     },
	I0813 00:04:31.586470  826514 command_runner.go:124] >     {
	I0813 00:04:31.586476  826514 command_runner.go:124] >       "id": "296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899",
	I0813 00:04:31.586481  826514 command_runner.go:124] >       "repoTags": [
	I0813 00:04:31.586487  826514 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns:v1.8.0"
	I0813 00:04:31.586491  826514 command_runner.go:124] >       ],
	I0813 00:04:31.586495  826514 command_runner.go:124] >       "repoDigests": [
	I0813 00:04:31.586503  826514 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61",
	I0813 00:04:31.586513  826514 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e"
	I0813 00:04:31.586517  826514 command_runner.go:124] >       ],
	I0813 00:04:31.586521  826514 command_runner.go:124] >       "size": "42585056",
	I0813 00:04:31.586525  826514 command_runner.go:124] >       "uid": null,
	I0813 00:04:31.586529  826514 command_runner.go:124] >       "username": "",
	I0813 00:04:31.586534  826514 command_runner.go:124] >       "spec": null
	I0813 00:04:31.586538  826514 command_runner.go:124] >     },
	I0813 00:04:31.586542  826514 command_runner.go:124] >     {
	I0813 00:04:31.586548  826514 command_runner.go:124] >       "id": "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934",
	I0813 00:04:31.586554  826514 command_runner.go:124] >       "repoTags": [
	I0813 00:04:31.586558  826514 command_runner.go:124] >         "k8s.gcr.io/etcd:3.4.13-0"
	I0813 00:04:31.586563  826514 command_runner.go:124] >       ],
	I0813 00:04:31.586566  826514 command_runner.go:124] >       "repoDigests": [
	I0813 00:04:31.586574  826514 command_runner.go:124] >         "k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2",
	I0813 00:04:31.586581  826514 command_runner.go:124] >         "k8s.gcr.io/etcd@sha256:bd4d2c9a19be8a492bc79df53eee199fd04b415e9993eb69f7718052602a147a"
	I0813 00:04:31.586585  826514 command_runner.go:124] >       ],
	I0813 00:04:31.586589  826514 command_runner.go:124] >       "size": "254662613",
	I0813 00:04:31.586597  826514 command_runner.go:124] >       "uid": null,
	I0813 00:04:31.586601  826514 command_runner.go:124] >       "username": "",
	I0813 00:04:31.586607  826514 command_runner.go:124] >       "spec": null
	I0813 00:04:31.586610  826514 command_runner.go:124] >     },
	I0813 00:04:31.586613  826514 command_runner.go:124] >     {
	I0813 00:04:31.586619  826514 command_runner.go:124] >       "id": "3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80",
	I0813 00:04:31.586626  826514 command_runner.go:124] >       "repoTags": [
	I0813 00:04:31.586630  826514 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver:v1.21.3"
	I0813 00:04:31.586634  826514 command_runner.go:124] >       ],
	I0813 00:04:31.586638  826514 command_runner.go:124] >       "repoDigests": [
	I0813 00:04:31.586645  826514 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2",
	I0813 00:04:31.586653  826514 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver@sha256:910cfdf034262c7b68ecb17c0885f39bdaaad07d87c9a5b6320819d8500b7ee5"
	I0813 00:04:31.586657  826514 command_runner.go:124] >       ],
	I0813 00:04:31.586661  826514 command_runner.go:124] >       "size": "126878961",
	I0813 00:04:31.586666  826514 command_runner.go:124] >       "uid": {
	I0813 00:04:31.586670  826514 command_runner.go:124] >         "value": "0"
	I0813 00:04:31.586674  826514 command_runner.go:124] >       },
	I0813 00:04:31.586678  826514 command_runner.go:124] >       "username": "",
	I0813 00:04:31.586682  826514 command_runner.go:124] >       "spec": null
	I0813 00:04:31.586685  826514 command_runner.go:124] >     },
	I0813 00:04:31.586688  826514 command_runner.go:124] >     {
	I0813 00:04:31.586695  826514 command_runner.go:124] >       "id": "bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9",
	I0813 00:04:31.586701  826514 command_runner.go:124] >       "repoTags": [
	I0813 00:04:31.586706  826514 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager:v1.21.3"
	I0813 00:04:31.586710  826514 command_runner.go:124] >       ],
	I0813 00:04:31.586714  826514 command_runner.go:124] >       "repoDigests": [
	I0813 00:04:31.586721  826514 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b",
	I0813 00:04:31.586734  826514 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager@sha256:7fb1f6614597c255b475ed8abf553e0d4e8ea211b06a90bed53eaddcfb9c354f"
	I0813 00:04:31.586739  826514 command_runner.go:124] >       ],
	I0813 00:04:31.586761  826514 command_runner.go:124] >       "size": "121087578",
	I0813 00:04:31.586768  826514 command_runner.go:124] >       "uid": {
	I0813 00:04:31.586772  826514 command_runner.go:124] >         "value": "0"
	I0813 00:04:31.586775  826514 command_runner.go:124] >       },
	I0813 00:04:31.586784  826514 command_runner.go:124] >       "username": "",
	I0813 00:04:31.586801  826514 command_runner.go:124] >       "spec": null
	I0813 00:04:31.586806  826514 command_runner.go:124] >     },
	I0813 00:04:31.586810  826514 command_runner.go:124] >     {
	I0813 00:04:31.586816  826514 command_runner.go:124] >       "id": "adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92",
	I0813 00:04:31.586822  826514 command_runner.go:124] >       "repoTags": [
	I0813 00:04:31.586827  826514 command_runner.go:124] >         "k8s.gcr.io/kube-proxy:v1.21.3"
	I0813 00:04:31.586833  826514 command_runner.go:124] >       ],
	I0813 00:04:31.586837  826514 command_runner.go:124] >       "repoDigests": [
	I0813 00:04:31.586844  826514 command_runner.go:124] >         "k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b",
	I0813 00:04:31.586855  826514 command_runner.go:124] >         "k8s.gcr.io/kube-proxy@sha256:c7778d7b97b2a822c3fe3e921d104ac42afbd38268de8df03557465780886627"
	I0813 00:04:31.586858  826514 command_runner.go:124] >       ],
	I0813 00:04:31.586862  826514 command_runner.go:124] >       "size": "105129702",
	I0813 00:04:31.586869  826514 command_runner.go:124] >       "uid": null,
	I0813 00:04:31.586873  826514 command_runner.go:124] >       "username": "",
	I0813 00:04:31.586877  826514 command_runner.go:124] >       "spec": null
	I0813 00:04:31.586880  826514 command_runner.go:124] >     },
	I0813 00:04:31.586884  826514 command_runner.go:124] >     {
	I0813 00:04:31.586890  826514 command_runner.go:124] >       "id": "6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a",
	I0813 00:04:31.586895  826514 command_runner.go:124] >       "repoTags": [
	I0813 00:04:31.586900  826514 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler:v1.21.3"
	I0813 00:04:31.586903  826514 command_runner.go:124] >       ],
	I0813 00:04:31.586907  826514 command_runner.go:124] >       "repoDigests": [
	I0813 00:04:31.586915  826514 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4",
	I0813 00:04:31.586924  826514 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler@sha256:b61779ea1bd936c137b25b3a7baa5551fbbd84fed8568d15c7c85ab1139521c0"
	I0813 00:04:31.586929  826514 command_runner.go:124] >       ],
	I0813 00:04:31.586933  826514 command_runner.go:124] >       "size": "51893338",
	I0813 00:04:31.586937  826514 command_runner.go:124] >       "uid": {
	I0813 00:04:31.586941  826514 command_runner.go:124] >         "value": "0"
	I0813 00:04:31.586945  826514 command_runner.go:124] >       },
	I0813 00:04:31.586949  826514 command_runner.go:124] >       "username": "",
	I0813 00:04:31.586952  826514 command_runner.go:124] >       "spec": null
	I0813 00:04:31.586956  826514 command_runner.go:124] >     },
	I0813 00:04:31.586959  826514 command_runner.go:124] >     {
	I0813 00:04:31.586966  826514 command_runner.go:124] >       "id": "0f8457a4c2ecaceac160805013dc3c61c63a1ff3dee74a473a36249a748e0253",
	I0813 00:04:31.586971  826514 command_runner.go:124] >       "repoTags": [
	I0813 00:04:31.586975  826514 command_runner.go:124] >         "k8s.gcr.io/pause:3.4.1"
	I0813 00:04:31.586981  826514 command_runner.go:124] >       ],
	I0813 00:04:31.586984  826514 command_runner.go:124] >       "repoDigests": [
	I0813 00:04:31.586992  826514 command_runner.go:124] >         "k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810",
	I0813 00:04:31.587002  826514 command_runner.go:124] >         "k8s.gcr.io/pause@sha256:914e745e524aa94315a25b49a7fafc0aa395e332126930593225d7a513f5a6b2"
	I0813 00:04:31.587005  826514 command_runner.go:124] >       ],
	I0813 00:04:31.587010  826514 command_runner.go:124] >       "size": "689817",
	I0813 00:04:31.587014  826514 command_runner.go:124] >       "uid": null,
	I0813 00:04:31.587018  826514 command_runner.go:124] >       "username": "",
	I0813 00:04:31.587022  826514 command_runner.go:124] >       "spec": null
	I0813 00:04:31.587026  826514 command_runner.go:124] >     }
	I0813 00:04:31.587029  826514 command_runner.go:124] >   ]
	I0813 00:04:31.587032  826514 command_runner.go:124] > }
	I0813 00:04:31.587284  826514 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 00:04:31.587302  826514 crio.go:333] Images already preloaded, skipping extraction
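Contrast this with the empty images list at 00:04:19 that forced the tarball copy: the preload check is just a scan of the repoTags in this JSON for the expected kube-apiserver tag. An equivalent one-off check from a shell (a sketch, assuming jq is installed, which the guest image does not guarantee):

	sudo crictl images --output json \
	  | jq -e '.images[].repoTags[] | select(. == "k8s.gcr.io/kube-apiserver:v1.21.3")' >/dev/null \
	  && echo preloaded || echo not-preloaded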
	I0813 00:04:31.587360  826514 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 00:04:31.628479  826514 command_runner.go:124] > {
	I0813 00:04:31.628499  826514 command_runner.go:124] >   "images": [
	I0813 00:04:31.628509  826514 command_runner.go:124] >     {
	I0813 00:04:31.628518  826514 command_runner.go:124] >       "id": "6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb",
	I0813 00:04:31.628527  826514 command_runner.go:124] >       "repoTags": [
	I0813 00:04:31.628536  826514 command_runner.go:124] >         "docker.io/kindest/kindnetd:v20210326-1e038dc5"
	I0813 00:04:31.628542  826514 command_runner.go:124] >       ],
	I0813 00:04:31.628550  826514 command_runner.go:124] >       "repoDigests": [
	I0813 00:04:31.628563  826514 command_runner.go:124] >         "docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1",
	I0813 00:04:31.628575  826514 command_runner.go:124] >         "docker.io/kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c"
	I0813 00:04:31.628581  826514 command_runner.go:124] >       ],
	I0813 00:04:31.628586  826514 command_runner.go:124] >       "size": "119984626",
	I0813 00:04:31.628592  826514 command_runner.go:124] >       "uid": null,
	I0813 00:04:31.628596  826514 command_runner.go:124] >       "username": "",
	I0813 00:04:31.628602  826514 command_runner.go:124] >       "spec": null
	I0813 00:04:31.628608  826514 command_runner.go:124] >     },
	I0813 00:04:31.628612  826514 command_runner.go:124] >     {
	I0813 00:04:31.628622  826514 command_runner.go:124] >       "id": "9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db",
	I0813 00:04:31.628631  826514 command_runner.go:124] >       "repoTags": [
	I0813 00:04:31.628640  826514 command_runner.go:124] >         "docker.io/kubernetesui/dashboard:v2.1.0"
	I0813 00:04:31.628648  826514 command_runner.go:124] >       ],
	I0813 00:04:31.628653  826514 command_runner.go:124] >       "repoDigests": [
	I0813 00:04:31.628663  826514 command_runner.go:124] >         "docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f",
	I0813 00:04:31.628674  826514 command_runner.go:124] >         "docker.io/kubernetesui/dashboard@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6"
	I0813 00:04:31.628678  826514 command_runner.go:124] >       ],
	I0813 00:04:31.628682  826514 command_runner.go:124] >       "size": "228528983",
	I0813 00:04:31.628687  826514 command_runner.go:124] >       "uid": null,
	I0813 00:04:31.628692  826514 command_runner.go:124] >       "username": "nonroot",
	I0813 00:04:31.628700  826514 command_runner.go:124] >       "spec": null
	I0813 00:04:31.628707  826514 command_runner.go:124] >     },
	I0813 00:04:31.628712  826514 command_runner.go:124] >     {
	I0813 00:04:31.628725  826514 command_runner.go:124] >       "id": "86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4",
	I0813 00:04:31.628743  826514 command_runner.go:124] >       "repoTags": [
	I0813 00:04:31.628755  826514 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper:v1.0.4"
	I0813 00:04:31.628760  826514 command_runner.go:124] >       ],
	I0813 00:04:31.628765  826514 command_runner.go:124] >       "repoDigests": [
	I0813 00:04:31.628774  826514 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper@sha256:555981a24f184420f3be0c79d4efb6c948a85cfce84034f85a563f4151a81cbf",
	I0813 00:04:31.628786  826514 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper@sha256:d78f995c07124874c2a2e9b404cffa6bc6233668d63d6c6210574971f3d5914b"
	I0813 00:04:31.628792  826514 command_runner.go:124] >       ],
	I0813 00:04:31.628796  826514 command_runner.go:124] >       "size": "36950651",
	I0813 00:04:31.628802  826514 command_runner.go:124] >       "uid": null,
	I0813 00:04:31.628814  826514 command_runner.go:124] >       "username": "",
	I0813 00:04:31.628824  826514 command_runner.go:124] >       "spec": null
	I0813 00:04:31.628829  826514 command_runner.go:124] >     },
	I0813 00:04:31.628835  826514 command_runner.go:124] >     {
	I0813 00:04:31.628846  826514 command_runner.go:124] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0813 00:04:31.628856  826514 command_runner.go:124] >       "repoTags": [
	I0813 00:04:31.628867  826514 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0813 00:04:31.628876  826514 command_runner.go:124] >       ],
	I0813 00:04:31.628882  826514 command_runner.go:124] >       "repoDigests": [
	I0813 00:04:31.628893  826514 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0813 00:04:31.628904  826514 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0813 00:04:31.628911  826514 command_runner.go:124] >       ],
	I0813 00:04:31.628918  826514 command_runner.go:124] >       "size": "31470524",
	I0813 00:04:31.628930  826514 command_runner.go:124] >       "uid": null,
	I0813 00:04:31.628939  826514 command_runner.go:124] >       "username": "",
	I0813 00:04:31.628947  826514 command_runner.go:124] >       "spec": null
	I0813 00:04:31.628952  826514 command_runner.go:124] >     },
	I0813 00:04:31.628960  826514 command_runner.go:124] >     {
	I0813 00:04:31.628971  826514 command_runner.go:124] >       "id": "296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899",
	I0813 00:04:31.628980  826514 command_runner.go:124] >       "repoTags": [
	I0813 00:04:31.628989  826514 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns:v1.8.0"
	I0813 00:04:31.628996  826514 command_runner.go:124] >       ],
	I0813 00:04:31.629001  826514 command_runner.go:124] >       "repoDigests": [
	I0813 00:04:31.629012  826514 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61",
	I0813 00:04:31.629027  826514 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e"
	I0813 00:04:31.629037  826514 command_runner.go:124] >       ],
	I0813 00:04:31.629043  826514 command_runner.go:124] >       "size": "42585056",
	I0813 00:04:31.629052  826514 command_runner.go:124] >       "uid": null,
	I0813 00:04:31.629059  826514 command_runner.go:124] >       "username": "",
	I0813 00:04:31.629070  826514 command_runner.go:124] >       "spec": null
	I0813 00:04:31.629077  826514 command_runner.go:124] >     },
	I0813 00:04:31.629082  826514 command_runner.go:124] >     {
	I0813 00:04:31.629091  826514 command_runner.go:124] >       "id": "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934",
	I0813 00:04:31.629097  826514 command_runner.go:124] >       "repoTags": [
	I0813 00:04:31.629108  826514 command_runner.go:124] >         "k8s.gcr.io/etcd:3.4.13-0"
	I0813 00:04:31.629115  826514 command_runner.go:124] >       ],
	I0813 00:04:31.629121  826514 command_runner.go:124] >       "repoDigests": [
	I0813 00:04:31.629136  826514 command_runner.go:124] >         "k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2",
	I0813 00:04:31.629150  826514 command_runner.go:124] >         "k8s.gcr.io/etcd@sha256:bd4d2c9a19be8a492bc79df53eee199fd04b415e9993eb69f7718052602a147a"
	I0813 00:04:31.629164  826514 command_runner.go:124] >       ],
	I0813 00:04:31.629173  826514 command_runner.go:124] >       "size": "254662613",
	I0813 00:04:31.629177  826514 command_runner.go:124] >       "uid": null,
	I0813 00:04:31.629186  826514 command_runner.go:124] >       "username": "",
	I0813 00:04:31.629194  826514 command_runner.go:124] >       "spec": null
	I0813 00:04:31.629201  826514 command_runner.go:124] >     },
	I0813 00:04:31.629206  826514 command_runner.go:124] >     {
	I0813 00:04:31.629216  826514 command_runner.go:124] >       "id": "3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80",
	I0813 00:04:31.629226  826514 command_runner.go:124] >       "repoTags": [
	I0813 00:04:31.629234  826514 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver:v1.21.3"
	I0813 00:04:31.629242  826514 command_runner.go:124] >       ],
	I0813 00:04:31.629248  826514 command_runner.go:124] >       "repoDigests": [
	I0813 00:04:31.629260  826514 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2",
	I0813 00:04:31.629274  826514 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver@sha256:910cfdf034262c7b68ecb17c0885f39bdaaad07d87c9a5b6320819d8500b7ee5"
	I0813 00:04:31.629283  826514 command_runner.go:124] >       ],
	I0813 00:04:31.629289  826514 command_runner.go:124] >       "size": "126878961",
	I0813 00:04:31.629297  826514 command_runner.go:124] >       "uid": {
	I0813 00:04:31.629303  826514 command_runner.go:124] >         "value": "0"
	I0813 00:04:31.629312  826514 command_runner.go:124] >       },
	I0813 00:04:31.629318  826514 command_runner.go:124] >       "username": "",
	I0813 00:04:31.629327  826514 command_runner.go:124] >       "spec": null
	I0813 00:04:31.629332  826514 command_runner.go:124] >     },
	I0813 00:04:31.629341  826514 command_runner.go:124] >     {
	I0813 00:04:31.629351  826514 command_runner.go:124] >       "id": "bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9",
	I0813 00:04:31.629360  826514 command_runner.go:124] >       "repoTags": [
	I0813 00:04:31.629368  826514 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager:v1.21.3"
	I0813 00:04:31.629377  826514 command_runner.go:124] >       ],
	I0813 00:04:31.629383  826514 command_runner.go:124] >       "repoDigests": [
	I0813 00:04:31.629397  826514 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b",
	I0813 00:04:31.629413  826514 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager@sha256:7fb1f6614597c255b475ed8abf553e0d4e8ea211b06a90bed53eaddcfb9c354f"
	I0813 00:04:31.629421  826514 command_runner.go:124] >       ],
	I0813 00:04:31.629459  826514 command_runner.go:124] >       "size": "121087578",
	I0813 00:04:31.629470  826514 command_runner.go:124] >       "uid": {
	I0813 00:04:31.629476  826514 command_runner.go:124] >         "value": "0"
	I0813 00:04:31.629481  826514 command_runner.go:124] >       },
	I0813 00:04:31.629525  826514 command_runner.go:124] >       "username": "",
	I0813 00:04:31.629534  826514 command_runner.go:124] >       "spec": null
	I0813 00:04:31.629540  826514 command_runner.go:124] >     },
	I0813 00:04:31.629546  826514 command_runner.go:124] >     {
	I0813 00:04:31.629563  826514 command_runner.go:124] >       "id": "adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92",
	I0813 00:04:31.629572  826514 command_runner.go:124] >       "repoTags": [
	I0813 00:04:31.629580  826514 command_runner.go:124] >         "k8s.gcr.io/kube-proxy:v1.21.3"
	I0813 00:04:31.629587  826514 command_runner.go:124] >       ],
	I0813 00:04:31.629594  826514 command_runner.go:124] >       "repoDigests": [
	I0813 00:04:31.629607  826514 command_runner.go:124] >         "k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b",
	I0813 00:04:31.629622  826514 command_runner.go:124] >         "k8s.gcr.io/kube-proxy@sha256:c7778d7b97b2a822c3fe3e921d104ac42afbd38268de8df03557465780886627"
	I0813 00:04:31.629630  826514 command_runner.go:124] >       ],
	I0813 00:04:31.629637  826514 command_runner.go:124] >       "size": "105129702",
	I0813 00:04:31.629646  826514 command_runner.go:124] >       "uid": null,
	I0813 00:04:31.629651  826514 command_runner.go:124] >       "username": "",
	I0813 00:04:31.629658  826514 command_runner.go:124] >       "spec": null
	I0813 00:04:31.629663  826514 command_runner.go:124] >     },
	I0813 00:04:31.629669  826514 command_runner.go:124] >     {
	I0813 00:04:31.629680  826514 command_runner.go:124] >       "id": "6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a",
	I0813 00:04:31.629688  826514 command_runner.go:124] >       "repoTags": [
	I0813 00:04:31.629697  826514 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler:v1.21.3"
	I0813 00:04:31.629705  826514 command_runner.go:124] >       ],
	I0813 00:04:31.629712  826514 command_runner.go:124] >       "repoDigests": [
	I0813 00:04:31.629731  826514 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4",
	I0813 00:04:31.629748  826514 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler@sha256:b61779ea1bd936c137b25b3a7baa5551fbbd84fed8568d15c7c85ab1139521c0"
	I0813 00:04:31.629757  826514 command_runner.go:124] >       ],
	I0813 00:04:31.629764  826514 command_runner.go:124] >       "size": "51893338",
	I0813 00:04:31.629771  826514 command_runner.go:124] >       "uid": {
	I0813 00:04:31.629778  826514 command_runner.go:124] >         "value": "0"
	I0813 00:04:31.629784  826514 command_runner.go:124] >       },
	I0813 00:04:31.629795  826514 command_runner.go:124] >       "username": "",
	I0813 00:04:31.629804  826514 command_runner.go:124] >       "spec": null
	I0813 00:04:31.629809  826514 command_runner.go:124] >     },
	I0813 00:04:31.629815  826514 command_runner.go:124] >     {
	I0813 00:04:31.629825  826514 command_runner.go:124] >       "id": "0f8457a4c2ecaceac160805013dc3c61c63a1ff3dee74a473a36249a748e0253",
	I0813 00:04:31.629834  826514 command_runner.go:124] >       "repoTags": [
	I0813 00:04:31.629841  826514 command_runner.go:124] >         "k8s.gcr.io/pause:3.4.1"
	I0813 00:04:31.629847  826514 command_runner.go:124] >       ],
	I0813 00:04:31.629852  826514 command_runner.go:124] >       "repoDigests": [
	I0813 00:04:31.629864  826514 command_runner.go:124] >         "k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810",
	I0813 00:04:31.629878  826514 command_runner.go:124] >         "k8s.gcr.io/pause@sha256:914e745e524aa94315a25b49a7fafc0aa395e332126930593225d7a513f5a6b2"
	I0813 00:04:31.629886  826514 command_runner.go:124] >       ],
	I0813 00:04:31.629893  826514 command_runner.go:124] >       "size": "689817",
	I0813 00:04:31.629912  826514 command_runner.go:124] >       "uid": null,
	I0813 00:04:31.629921  826514 command_runner.go:124] >       "username": "",
	I0813 00:04:31.629928  826514 command_runner.go:124] >       "spec": null
	I0813 00:04:31.629933  826514 command_runner.go:124] >     }
	I0813 00:04:31.629939  826514 command_runner.go:124] >   ]
	I0813 00:04:31.629945  826514 command_runner.go:124] > }
	I0813 00:04:31.630103  826514 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 00:04:31.630116  826514 cache_images.go:74] Images are preloaded, skipping loading
	I0813 00:04:31.630195  826514 ssh_runner.go:149] Run: crio config
	I0813 00:04:31.716486  826514 command_runner.go:124] > # The CRI-O configuration file specifies all of the available configuration
	I0813 00:04:31.716523  826514 command_runner.go:124] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0813 00:04:31.716534  826514 command_runner.go:124] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0813 00:04:31.716538  826514 command_runner.go:124] > #
	I0813 00:04:31.716549  826514 command_runner.go:124] > # Please refer to crio.conf(5) for details of all configuration options.
	I0813 00:04:31.716559  826514 command_runner.go:124] > # CRI-O supports partial configuration reload during runtime, which can be
	I0813 00:04:31.716569  826514 command_runner.go:124] > # done by sending SIGHUP to the running process. Currently supported options
	I0813 00:04:31.716584  826514 command_runner.go:124] > # are explicitly mentioned with: 'This option supports live configuration
	I0813 00:04:31.716598  826514 command_runner.go:124] > # reload'.
	I0813 00:04:31.716608  826514 command_runner.go:124] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0813 00:04:31.716620  826514 command_runner.go:124] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0813 00:04:31.716634  826514 command_runner.go:124] > # you want to change the system's defaults. If you want to modify storage just
	I0813 00:04:31.716657  826514 command_runner.go:124] > # for CRI-O, you can change the storage configuration options here.
	I0813 00:04:31.716664  826514 command_runner.go:124] > [crio]
	I0813 00:04:31.716675  826514 command_runner.go:124] > # Path to the "root directory". CRI-O stores all of its data, including
	I0813 00:04:31.716684  826514 command_runner.go:124] > # containers images, in this directory.
	I0813 00:04:31.716725  826514 command_runner.go:124] > #root = "/var/lib/containers/storage"
	I0813 00:04:31.716757  826514 command_runner.go:124] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0813 00:04:31.716776  826514 command_runner.go:124] > #runroot = "/var/run/containers/storage"
	I0813 00:04:31.716789  826514 command_runner.go:124] > # Storage driver used to manage the storage of images and containers. Please
	I0813 00:04:31.716803  826514 command_runner.go:124] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0813 00:04:31.716813  826514 command_runner.go:124] > #storage_driver = "overlay"
	I0813 00:04:31.716823  826514 command_runner.go:124] > # List to pass options to the storage driver. Please refer to
	I0813 00:04:31.716834  826514 command_runner.go:124] > # containers-storage.conf(5) to see all available storage options.
	I0813 00:04:31.716841  826514 command_runner.go:124] > #storage_option = [
	I0813 00:04:31.716846  826514 command_runner.go:124] > #]
	I0813 00:04:31.716858  826514 command_runner.go:124] > # The default log directory where all logs will go unless directly specified by
	I0813 00:04:31.716871  826514 command_runner.go:124] > # the kubelet. The log directory specified must be an absolute directory.
	I0813 00:04:31.716879  826514 command_runner.go:124] > log_dir = "/var/log/crio/pods"
	I0813 00:04:31.716888  826514 command_runner.go:124] > # Location for CRI-O to lay down the temporary version file.
	I0813 00:04:31.716910  826514 command_runner.go:124] > # It is used to check if crio wipe should wipe containers, which should
	I0813 00:04:31.716920  826514 command_runner.go:124] > # always happen on a node reboot
	I0813 00:04:31.716928  826514 command_runner.go:124] > version_file = "/var/run/crio/version"
	I0813 00:04:31.716940  826514 command_runner.go:124] > # Location for CRI-O to lay down the persistent version file.
	I0813 00:04:31.716950  826514 command_runner.go:124] > # It is used to check if crio wipe should wipe images, which should
	I0813 00:04:31.716958  826514 command_runner.go:124] > # only happen when CRI-O has been upgraded
	I0813 00:04:31.716974  826514 command_runner.go:124] > version_file_persist = "/var/lib/crio/version"
	I0813 00:04:31.716986  826514 command_runner.go:124] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0813 00:04:31.716992  826514 command_runner.go:124] > [crio.api]
	I0813 00:04:31.717001  826514 command_runner.go:124] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0813 00:04:31.717011  826514 command_runner.go:124] > listen = "/var/run/crio/crio.sock"
	I0813 00:04:31.717020  826514 command_runner.go:124] > # IP address on which the stream server will listen.
	I0813 00:04:31.717029  826514 command_runner.go:124] > stream_address = "127.0.0.1"
	I0813 00:04:31.717041  826514 command_runner.go:124] > # The port on which the stream server will listen. If the port is set to "0", then
	I0813 00:04:31.717051  826514 command_runner.go:124] > # CRI-O will allocate a random free port number.
	I0813 00:04:31.717058  826514 command_runner.go:124] > stream_port = "0"
	I0813 00:04:31.717069  826514 command_runner.go:124] > # Enable encrypted TLS transport of the stream server.
	I0813 00:04:31.717075  826514 command_runner.go:124] > stream_enable_tls = false
	I0813 00:04:31.717084  826514 command_runner.go:124] > # Length of time until open streams terminate due to lack of activity
	I0813 00:04:31.717090  826514 command_runner.go:124] > stream_idle_timeout = ""
	I0813 00:04:31.717099  826514 command_runner.go:124] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0813 00:04:31.717110  826514 command_runner.go:124] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0813 00:04:31.717116  826514 command_runner.go:124] > # minutes.
	I0813 00:04:31.717122  826514 command_runner.go:124] > stream_tls_cert = ""
	I0813 00:04:31.717131  826514 command_runner.go:124] > # Path to the key file used to serve the encrypted stream. This file can
	I0813 00:04:31.717142  826514 command_runner.go:124] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0813 00:04:31.717148  826514 command_runner.go:124] > stream_tls_key = ""
	I0813 00:04:31.717163  826514 command_runner.go:124] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0813 00:04:31.717174  826514 command_runner.go:124] > # communication with the encrypted stream. This file can change and CRI-O will
	I0813 00:04:31.717187  826514 command_runner.go:124] > # automatically pick up the changes within 5 minutes.
	I0813 00:04:31.717193  826514 command_runner.go:124] > stream_tls_ca = ""
	I0813 00:04:31.717207  826514 command_runner.go:124] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0813 00:04:31.717217  826514 command_runner.go:124] > grpc_max_send_msg_size = 16777216
	I0813 00:04:31.717236  826514 command_runner.go:124] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0813 00:04:31.717246  826514 command_runner.go:124] > grpc_max_recv_msg_size = 16777216
	I0813 00:04:31.717257  826514 command_runner.go:124] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0813 00:04:31.717268  826514 command_runner.go:124] > # and options for how to set up and manage the OCI runtime.
	I0813 00:04:31.717274  826514 command_runner.go:124] > [crio.runtime]
	I0813 00:04:31.717285  826514 command_runner.go:124] > # A list of ulimits to be set in containers by default, specified as
	I0813 00:04:31.717295  826514 command_runner.go:124] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0813 00:04:31.717302  826514 command_runner.go:124] > # "nofile=1024:2048"
	I0813 00:04:31.717312  826514 command_runner.go:124] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0813 00:04:31.717322  826514 command_runner.go:124] > #default_ulimits = [
	I0813 00:04:31.717329  826514 command_runner.go:124] > #]
	I0813 00:04:31.717340  826514 command_runner.go:124] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0813 00:04:31.717349  826514 command_runner.go:124] > no_pivot = false
	I0813 00:04:31.717358  826514 command_runner.go:124] > # decryption_keys_path is the path where the keys required for
	I0813 00:04:31.717405  826514 command_runner.go:124] > # image decryption are stored. This option supports live configuration reload.
	I0813 00:04:31.717416  826514 command_runner.go:124] > decryption_keys_path = "/etc/crio/keys/"
	I0813 00:04:31.717425  826514 command_runner.go:124] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0813 00:04:31.717434  826514 command_runner.go:124] > # Will be searched for using $PATH if empty.
	I0813 00:04:31.717444  826514 command_runner.go:124] > conmon = "/usr/libexec/crio/conmon"
	I0813 00:04:31.717452  826514 command_runner.go:124] > # Cgroup setting for conmon
	I0813 00:04:31.717461  826514 command_runner.go:124] > conmon_cgroup = "system.slice"
	I0813 00:04:31.717472  826514 command_runner.go:124] > # Environment variable list for the conmon process, used for passing necessary
	I0813 00:04:31.717483  826514 command_runner.go:124] > # environment variables to conmon or the runtime.
	I0813 00:04:31.717489  826514 command_runner.go:124] > conmon_env = [
	I0813 00:04:31.717499  826514 command_runner.go:124] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0813 00:04:31.717507  826514 command_runner.go:124] > ]
	I0813 00:04:31.717516  826514 command_runner.go:124] > # Additional environment variables to set for all the
	I0813 00:04:31.717531  826514 command_runner.go:124] > # containers. These are overridden if set in the
	I0813 00:04:31.717543  826514 command_runner.go:124] > # container image spec or in the container runtime configuration.
	I0813 00:04:31.717549  826514 command_runner.go:124] > default_env = [
	I0813 00:04:31.717554  826514 command_runner.go:124] > ]
	I0813 00:04:31.717564  826514 command_runner.go:124] > # If true, SELinux will be used for pod separation on the host.
	I0813 00:04:31.717573  826514 command_runner.go:124] > selinux = false
	I0813 00:04:31.717590  826514 command_runner.go:124] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0813 00:04:31.717604  826514 command_runner.go:124] > # for the runtime. If not specified, then the internal default seccomp profile
	I0813 00:04:31.717614  826514 command_runner.go:124] > # will be used. This option supports live configuration reload.
	I0813 00:04:31.717621  826514 command_runner.go:124] > seccomp_profile = ""
	I0813 00:04:31.717631  826514 command_runner.go:124] > # Changes the meaning of an empty seccomp profile. By default
	I0813 00:04:31.717643  826514 command_runner.go:124] > # (and according to CRI spec), an empty profile means unconfined.
	I0813 00:04:31.717653  826514 command_runner.go:124] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0813 00:04:31.717663  826514 command_runner.go:124] > # which might increase security.
	I0813 00:04:31.717671  826514 command_runner.go:124] > seccomp_use_default_when_empty = false
	I0813 00:04:31.717684  826514 command_runner.go:124] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0813 00:04:31.717695  826514 command_runner.go:124] > # profile name is "crio-default". This profile only takes effect if the user
	I0813 00:04:31.717707  826514 command_runner.go:124] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0813 00:04:31.717721  826514 command_runner.go:124] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0813 00:04:31.717733  826514 command_runner.go:124] > # This option supports live configuration reload.
	I0813 00:04:31.717740  826514 command_runner.go:124] > apparmor_profile = "crio-default"
	I0813 00:04:31.717754  826514 command_runner.go:124] > # Used to change irqbalance service config file path which is used for configuring
	I0813 00:04:31.717761  826514 command_runner.go:124] > # irqbalance daemon.
	I0813 00:04:31.717773  826514 command_runner.go:124] > irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0813 00:04:31.717782  826514 command_runner.go:124] > # Cgroup management implementation used for the runtime.
	I0813 00:04:31.717790  826514 command_runner.go:124] > cgroup_manager = "systemd"
	I0813 00:04:31.717800  826514 command_runner.go:124] > # Specify whether the image pull must be performed in a separate cgroup.
	I0813 00:04:31.717810  826514 command_runner.go:124] > separate_pull_cgroup = ""
	I0813 00:04:31.717821  826514 command_runner.go:124] > # List of default capabilities for containers. If it is empty or commented out,
	I0813 00:04:31.717834  826514 command_runner.go:124] > # only the capabilities defined in the containers json file by the user/kube
	I0813 00:04:31.717842  826514 command_runner.go:124] > # will be added.
	I0813 00:04:31.717849  826514 command_runner.go:124] > default_capabilities = [
	I0813 00:04:31.717855  826514 command_runner.go:124] > 	"CHOWN",
	I0813 00:04:31.717861  826514 command_runner.go:124] > 	"DAC_OVERRIDE",
	I0813 00:04:31.717867  826514 command_runner.go:124] > 	"FSETID",
	I0813 00:04:31.717873  826514 command_runner.go:124] > 	"FOWNER",
	I0813 00:04:31.717879  826514 command_runner.go:124] > 	"SETGID",
	I0813 00:04:31.717885  826514 command_runner.go:124] > 	"SETUID",
	I0813 00:04:31.717891  826514 command_runner.go:124] > 	"SETPCAP",
	I0813 00:04:31.717897  826514 command_runner.go:124] > 	"NET_BIND_SERVICE",
	I0813 00:04:31.717905  826514 command_runner.go:124] > 	"KILL",
	I0813 00:04:31.717913  826514 command_runner.go:124] > ]
	I0813 00:04:31.717926  826514 command_runner.go:124] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0813 00:04:31.717939  826514 command_runner.go:124] > # defined in the container json file by the user/kube will be added.
	I0813 00:04:31.717945  826514 command_runner.go:124] > default_sysctls = [
	I0813 00:04:31.717957  826514 command_runner.go:124] > ]
	I0813 00:04:31.717968  826514 command_runner.go:124] > # List of additional devices, specified as
	I0813 00:04:31.717982  826514 command_runner.go:124] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0813 00:04:31.717993  826514 command_runner.go:124] > # If it is empty or commented out, only the devices
	I0813 00:04:31.718004  826514 command_runner.go:124] > # defined in the container json file by the user/kube will be added.
	I0813 00:04:31.718012  826514 command_runner.go:124] > additional_devices = [
	I0813 00:04:31.718017  826514 command_runner.go:124] > ]
	I0813 00:04:31.718028  826514 command_runner.go:124] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0813 00:04:31.718038  826514 command_runner.go:124] > # directories does not exist, then CRI-O will automatically skip them.
	I0813 00:04:31.718046  826514 command_runner.go:124] > hooks_dir = [
	I0813 00:04:31.718053  826514 command_runner.go:124] > 	"/usr/share/containers/oci/hooks.d",
	I0813 00:04:31.718060  826514 command_runner.go:124] > ]
	I0813 00:04:31.718070  826514 command_runner.go:124] > # Path to the file specifying the defaults mounts for each container. The
	I0813 00:04:31.718084  826514 command_runner.go:124] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0813 00:04:31.718095  826514 command_runner.go:124] > # its default mounts from the following two files:
	I0813 00:04:31.718100  826514 command_runner.go:124] > #
	I0813 00:04:31.718110  826514 command_runner.go:124] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0813 00:04:31.718123  826514 command_runner.go:124] > #      override file, where users can either add in their own default mounts, or
	I0813 00:04:31.718132  826514 command_runner.go:124] > #      override the default mounts shipped with the package.
	I0813 00:04:31.718140  826514 command_runner.go:124] > #
	I0813 00:04:31.718152  826514 command_runner.go:124] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0813 00:04:31.718166  826514 command_runner.go:124] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0813 00:04:31.718179  826514 command_runner.go:124] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0813 00:04:31.718190  826514 command_runner.go:124] > #      only add mounts it finds in this file.
	I0813 00:04:31.718195  826514 command_runner.go:124] > #
	I0813 00:04:31.718202  826514 command_runner.go:124] > #default_mounts_file = ""
	I0813 00:04:31.718210  826514 command_runner.go:124] > # Maximum number of processes allowed in a container.
	I0813 00:04:31.718216  826514 command_runner.go:124] > pids_limit = 1024
	I0813 00:04:31.718230  826514 command_runner.go:124] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0813 00:04:31.718244  826514 command_runner.go:124] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0813 00:04:31.718255  826514 command_runner.go:124] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0813 00:04:31.718286  826514 command_runner.go:124] > # limit is never exceeded.
	I0813 00:04:31.718295  826514 command_runner.go:124] > log_size_max = -1
	I0813 00:04:31.718356  826514 command_runner.go:124] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0813 00:04:31.718366  826514 command_runner.go:124] > log_to_journald = false
	I0813 00:04:31.718375  826514 command_runner.go:124] > # Path to directory in which container exit files are written to by conmon.
	I0813 00:04:31.718386  826514 command_runner.go:124] > container_exits_dir = "/var/run/crio/exits"
	I0813 00:04:31.718397  826514 command_runner.go:124] > # Path to directory for container attach sockets.
	I0813 00:04:31.718406  826514 command_runner.go:124] > container_attach_socket_dir = "/var/run/crio"
	I0813 00:04:31.718424  826514 command_runner.go:124] > # The prefix to use for the source of the bind mounts.
	I0813 00:04:31.718433  826514 command_runner.go:124] > bind_mount_prefix = ""
	I0813 00:04:31.718442  826514 command_runner.go:124] > # If set to true, all containers will run in read-only mode.
	I0813 00:04:31.718451  826514 command_runner.go:124] > read_only = false
	I0813 00:04:31.718461  826514 command_runner.go:124] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0813 00:04:31.718474  826514 command_runner.go:124] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0813 00:04:31.718482  826514 command_runner.go:124] > # live configuration reload.
	I0813 00:04:31.718488  826514 command_runner.go:124] > log_level = "info"
	I0813 00:04:31.718497  826514 command_runner.go:124] > # Filter the log messages by the provided regular expression.
	I0813 00:04:31.718506  826514 command_runner.go:124] > # This option supports live configuration reload.
	I0813 00:04:31.718514  826514 command_runner.go:124] > log_filter = ""
	I0813 00:04:31.718524  826514 command_runner.go:124] > # The UID mappings for the user namespace of each container. A range is
	I0813 00:04:31.718537  826514 command_runner.go:124] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0813 00:04:31.718548  826514 command_runner.go:124] > # separated by comma.
	I0813 00:04:31.718555  826514 command_runner.go:124] > uid_mappings = ""
	I0813 00:04:31.718566  826514 command_runner.go:124] > # The GID mappings for the user namespace of each container. A range is
	I0813 00:04:31.718579  826514 command_runner.go:124] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0813 00:04:31.718586  826514 command_runner.go:124] > # separated by comma.
	I0813 00:04:31.718592  826514 command_runner.go:124] > gid_mappings = ""
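	As a sketch (values are illustrative, not from this run), mapping container root to an unprivileged host range in the containerUID:HostUID:Size form above would be:
	
	    uid_mappings = "0:100000:65536"
	    gid_mappings = "0:100000:65536"
	
	Container ID 0 then maps to host ID 100000 across 65536 consecutive IDs; additional ranges would be comma-separated.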
	I0813 00:04:31.718603  826514 command_runner.go:124] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0813 00:04:31.718613  826514 command_runner.go:124] > # regarding the proper termination of the container. The lowest possible
	I0813 00:04:31.718625  826514 command_runner.go:124] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0813 00:04:31.718632  826514 command_runner.go:124] > ctr_stop_timeout = 30
	I0813 00:04:31.718642  826514 command_runner.go:124] > # manage_ns_lifecycle determines whether we pin and remove namespaces
	I0813 00:04:31.718649  826514 command_runner.go:124] > # and manage their lifecycle.
	I0813 00:04:31.718660  826514 command_runner.go:124] > # This option is being deprecated, and will be unconditionally true in the future.
	I0813 00:04:31.718670  826514 command_runner.go:124] > manage_ns_lifecycle = true
	I0813 00:04:31.718681  826514 command_runner.go:124] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0813 00:04:31.718691  826514 command_runner.go:124] > # when a pod does not have a private PID namespace, and does not use
	I0813 00:04:31.718699  826514 command_runner.go:124] > # a kernel separating runtime (like kata).
	I0813 00:04:31.718708  826514 command_runner.go:124] > # It requires manage_ns_lifecycle to be true.
	I0813 00:04:31.718717  826514 command_runner.go:124] > drop_infra_ctr = false
	I0813 00:04:31.718728  826514 command_runner.go:124] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0813 00:04:31.718740  826514 command_runner.go:124] > # You can use linux CPU list format to specify desired CPUs.
	I0813 00:04:31.718753  826514 command_runner.go:124] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0813 00:04:31.718765  826514 command_runner.go:124] > # infra_ctr_cpuset = ""
	I0813 00:04:31.718775  826514 command_runner.go:124] > # The directory where the state of the managed namespaces gets tracked.
	I0813 00:04:31.718786  826514 command_runner.go:124] > # Only used when manage_ns_lifecycle is true.
	I0813 00:04:31.718793  826514 command_runner.go:124] > namespaces_dir = "/var/run"
	I0813 00:04:31.718808  826514 command_runner.go:124] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0813 00:04:31.718817  826514 command_runner.go:124] > pinns_path = "/usr/bin/pinns"
	I0813 00:04:31.718827  826514 command_runner.go:124] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0813 00:04:31.718835  826514 command_runner.go:124] > # The name is matched against the runtimes map below. If this value is changed,
	I0813 00:04:31.718845  826514 command_runner.go:124] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0813 00:04:31.718852  826514 command_runner.go:124] > default_runtime = "runc"
	I0813 00:04:31.718862  826514 command_runner.go:124] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0813 00:04:31.718873  826514 command_runner.go:124] > # The runtime to use is picked based on the runtime_handler provided by the CRI.
	I0813 00:04:31.718884  826514 command_runner.go:124] > # If no runtime_handler is provided, the runtime will be picked based on the level
	I0813 00:04:31.718894  826514 command_runner.go:124] > # of trust of the workload. Each entry in the table should follow the format:
	I0813 00:04:31.718899  826514 command_runner.go:124] > #
	I0813 00:04:31.718907  826514 command_runner.go:124] > #[crio.runtime.runtimes.runtime-handler]
	I0813 00:04:31.718915  826514 command_runner.go:124] > #  runtime_path = "/path/to/the/executable"
	I0813 00:04:31.718923  826514 command_runner.go:124] > #  runtime_type = "oci"
	I0813 00:04:31.718931  826514 command_runner.go:124] > #  runtime_root = "/path/to/the/root"
	I0813 00:04:31.718939  826514 command_runner.go:124] > #  privileged_without_host_devices = false
	I0813 00:04:31.718946  826514 command_runner.go:124] > #  allowed_annotations = []
	I0813 00:04:31.718952  826514 command_runner.go:124] > # Where:
	I0813 00:04:31.718960  826514 command_runner.go:124] > # - runtime-handler: name used to identify the runtime
	I0813 00:04:31.718971  826514 command_runner.go:124] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0813 00:04:31.718982  826514 command_runner.go:124] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0813 00:04:31.718992  826514 command_runner.go:124] > #   the runtime executable name, and the runtime executable should be placed
	I0813 00:04:31.718998  826514 command_runner.go:124] > #   in $PATH.
	I0813 00:04:31.719009  826514 command_runner.go:124] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0813 00:04:31.719018  826514 command_runner.go:124] > #   omitted, an "oci" runtime is assumed.
	I0813 00:04:31.719030  826514 command_runner.go:124] > # - runtime_root (optional, string): root directory for storage of containers
	I0813 00:04:31.719036  826514 command_runner.go:124] > #   state.
	I0813 00:04:31.719047  826514 command_runner.go:124] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0813 00:04:31.719059  826514 command_runner.go:124] > #   host devices from being passed to privileged containers.
	I0813 00:04:31.719071  826514 command_runner.go:124] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0813 00:04:31.719112  826514 command_runner.go:124] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0813 00:04:31.719123  826514 command_runner.go:124] > #   The currently recognized values are:
	I0813 00:04:31.719134  826514 command_runner.go:124] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0813 00:04:31.719150  826514 command_runner.go:124] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0813 00:04:31.719163  826514 command_runner.go:124] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0813 00:04:31.719173  826514 command_runner.go:124] > [crio.runtime.runtimes.runc]
	I0813 00:04:31.719180  826514 command_runner.go:124] > runtime_path = "/usr/bin/runc"
	I0813 00:04:31.719187  826514 command_runner.go:124] > runtime_type = "oci"
	I0813 00:04:31.719194  826514 command_runner.go:124] > runtime_root = "/run/runc"
	I0813 00:04:31.719210  826514 command_runner.go:124] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0813 00:04:31.719219  826514 command_runner.go:124] > # running containers
	I0813 00:04:31.719231  826514 command_runner.go:124] > #[crio.runtime.runtimes.crun]
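	Following the runtimes-table format documented above, a hypothetical entry enabling crun (binary path assumed, not verified on this VM) could read:
	
	    [crio.runtime.runtimes.crun]
	    runtime_path = "/usr/bin/crun"
	    runtime_type = "oci"
	    runtime_root = "/run/crun"
	
	A workload would then select it through the CRI runtime_handler, e.g. a Kubernetes RuntimeClass whose handler is "crun".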
	I0813 00:04:31.719246  826514 command_runner.go:124] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0813 00:04:31.719260  826514 command_runner.go:124] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0813 00:04:31.719272  826514 command_runner.go:124] > # surface and mitigating the consequences of a container breakout.
	I0813 00:04:31.719280  826514 command_runner.go:124] > # Kata Containers with the default configured VMM
	I0813 00:04:31.719290  826514 command_runner.go:124] > #[crio.runtime.runtimes.kata-runtime]
	I0813 00:04:31.719299  826514 command_runner.go:124] > # Kata Containers with the QEMU VMM
	I0813 00:04:31.719310  826514 command_runner.go:124] > #[crio.runtime.runtimes.kata-qemu]
	I0813 00:04:31.719320  826514 command_runner.go:124] > # Kata Containers with the Firecracker VMM
	I0813 00:04:31.719327  826514 command_runner.go:124] > #[crio.runtime.runtimes.kata-fc]
	I0813 00:04:31.719339  826514 command_runner.go:124] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0813 00:04:31.719346  826514 command_runner.go:124] > #
	I0813 00:04:31.719356  826514 command_runner.go:124] > # CRI-O reads its configured registries defaults from the system wide
	I0813 00:04:31.719369  826514 command_runner.go:124] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0813 00:04:31.719380  826514 command_runner.go:124] > # you want to modify just CRI-O, you can change the registries configuration in
	I0813 00:04:31.719393  826514 command_runner.go:124] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0813 00:04:31.719405  826514 command_runner.go:124] > # use the system's defaults from /etc/containers/registries.conf.
	I0813 00:04:31.719411  826514 command_runner.go:124] > [crio.image]
	I0813 00:04:31.719422  826514 command_runner.go:124] > # Default transport for pulling images from a remote container storage.
	I0813 00:04:31.719431  826514 command_runner.go:124] > default_transport = "docker://"
	I0813 00:04:31.719442  826514 command_runner.go:124] > # The path to a file containing credentials necessary for pulling images from
	I0813 00:04:31.719454  826514 command_runner.go:124] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0813 00:04:31.719461  826514 command_runner.go:124] > global_auth_file = ""
	I0813 00:04:31.719470  826514 command_runner.go:124] > # The image used to instantiate infra containers.
	I0813 00:04:31.719481  826514 command_runner.go:124] > # This option supports live configuration reload.
	I0813 00:04:31.719489  826514 command_runner.go:124] > pause_image = "k8s.gcr.io/pause:3.4.1"
	I0813 00:04:31.719500  826514 command_runner.go:124] > # The path to a file containing credentials specific for pulling the pause_image from
	I0813 00:04:31.719512  826514 command_runner.go:124] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0813 00:04:31.719522  826514 command_runner.go:124] > # This option supports live configuration reload.
	I0813 00:04:31.719528  826514 command_runner.go:124] > pause_image_auth_file = ""
	I0813 00:04:31.719538  826514 command_runner.go:124] > # The command to run to have a container stay in the paused state.
	I0813 00:04:31.719556  826514 command_runner.go:124] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0813 00:04:31.719569  826514 command_runner.go:124] > # specified in the pause image. When commented out, it will fall back to the
	I0813 00:04:31.719584  826514 command_runner.go:124] > # default: "/pause". This option supports live configuration reload.
	I0813 00:04:31.719594  826514 command_runner.go:124] > pause_command = "/pause"
	I0813 00:04:31.719605  826514 command_runner.go:124] > # Path to the file which decides what sort of policy we use when deciding
	I0813 00:04:31.719618  826514 command_runner.go:124] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0813 00:04:31.719630  826514 command_runner.go:124] > # this option be used, as the default behavior of using the system-wide default
	I0813 00:04:31.719640  826514 command_runner.go:124] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0813 00:04:31.719649  826514 command_runner.go:124] > # refer to containers-policy.json(5) for more details.
	I0813 00:04:31.719658  826514 command_runner.go:124] > signature_policy = ""
	I0813 00:04:31.719669  826514 command_runner.go:124] > # List of registries to skip TLS verification for pulling images. Please
	I0813 00:04:31.719681  826514 command_runner.go:124] > # consider configuring the registries via /etc/containers/registries.conf before
	I0813 00:04:31.719689  826514 command_runner.go:124] > # changing them here.
	I0813 00:04:31.719697  826514 command_runner.go:124] > #insecure_registries = "[]"
	I0813 00:04:31.719723  826514 command_runner.go:124] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0813 00:04:31.719737  826514 command_runner.go:124] > # ignore; the latter will ignore volumes entirely.
	I0813 00:04:31.719743  826514 command_runner.go:124] > image_volumes = "mkdir"
	I0813 00:04:31.719754  826514 command_runner.go:124] > # List of registries to be used when pulling an unqualified image (e.g.,
	I0813 00:04:31.719766  826514 command_runner.go:124] > # "alpine:latest"). By default, registries is set to "docker.io" for
	I0813 00:04:31.719777  826514 command_runner.go:124] > # compatibility reasons. Depending on your workload and use case you may add more
	I0813 00:04:31.719786  826514 command_runner.go:124] > # registries (e.g., "quay.io", "registry.fedoraproject.org",
	I0813 00:04:31.719796  826514 command_runner.go:124] > # "registry.opensuse.org", etc.).
	I0813 00:04:31.719803  826514 command_runner.go:124] > #registries = [
	I0813 00:04:31.719809  826514 command_runner.go:124] > # 	"docker.io",
	I0813 00:04:31.719814  826514 command_runner.go:124] > #]
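	To search more registries for unqualified images, the commented block above could be uncommented and extended, for example (registry choice illustrative):
	
	    registries = [
	    	"docker.io",
	    	"quay.io",
	    ]
	
	Note the deprecation warning emitted later in this log: on CRI-O 1.21 and newer this setting moves to containers-registries.conf(5).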
	I0813 00:04:31.719824  826514 command_runner.go:124] > # Temporary directory to use for storing big files
	I0813 00:04:31.719832  826514 command_runner.go:124] > big_files_temporary_dir = ""
	I0813 00:04:31.719843  826514 command_runner.go:124] > # The crio.network table contains settings pertaining to the management of
	I0813 00:04:31.719850  826514 command_runner.go:124] > # CNI plugins.
	I0813 00:04:31.719856  826514 command_runner.go:124] > [crio.network]
	I0813 00:04:31.719866  826514 command_runner.go:124] > # The default CNI network name to be selected. If not set or "", then
	I0813 00:04:31.719875  826514 command_runner.go:124] > # CRI-O will pick-up the first one found in network_dir.
	I0813 00:04:31.719883  826514 command_runner.go:124] > # cni_default_network = "kindnet"
	I0813 00:04:31.719895  826514 command_runner.go:124] > # Path to the directory where CNI configuration files are located.
	I0813 00:04:31.719905  826514 command_runner.go:124] > network_dir = "/etc/cni/net.d/"
	I0813 00:04:31.719914  826514 command_runner.go:124] > # Paths to directories where CNI plugin binaries are located.
	I0813 00:04:31.719927  826514 command_runner.go:124] > plugin_dirs = [
	I0813 00:04:31.719933  826514 command_runner.go:124] > 	"/opt/cni/bin/",
	I0813 00:04:31.719938  826514 command_runner.go:124] > ]
	I0813 00:04:31.719952  826514 command_runner.go:124] > # A necessary configuration for Prometheus based metrics retrieval
	I0813 00:04:31.719961  826514 command_runner.go:124] > [crio.metrics]
	I0813 00:04:31.719969  826514 command_runner.go:124] > # Globally enable or disable metrics support.
	I0813 00:04:31.719978  826514 command_runner.go:124] > enable_metrics = true
	I0813 00:04:31.719987  826514 command_runner.go:124] > # The port on which the metrics server will listen.
	I0813 00:04:31.719997  826514 command_runner.go:124] > metrics_port = 9090
	I0813 00:04:31.720031  826514 command_runner.go:124] > # Local socket path to bind the metrics server to
	I0813 00:04:31.720040  826514 command_runner.go:124] > metrics_socket = ""
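	With metrics enabled on port 9090 as configured above, the endpoint can be scraped Prometheus-style from the node, e.g. (the /metrics path is the Prometheus convention; reachability off-node depends on the bind address):
	
	    curl -s http://127.0.0.1:9090/metrics | head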
	I0813 00:04:31.720089  826514 command_runner.go:124] ! time="2021-08-13T00:04:31Z" level=info msg="Starting CRI-O, version: 1.20.2, git: d5a999ad0a35d895ded554e1e18c142075501a98(clean)"
	I0813 00:04:31.720112  826514 command_runner.go:124] ! time="2021-08-13T00:04:31Z" level=warning msg="The 'registries' option in crio.conf(5) (referenced in \"/etc/crio/crio.conf\") has been deprecated and will be removed with CRI-O 1.21."
	I0813 00:04:31.720129  826514 command_runner.go:124] ! time="2021-08-13T00:04:31Z" level=warning msg="Please refer to containers-registries.conf(5) for configuring unqualified-search registries."
	I0813 00:04:31.720150  826514 command_runner.go:124] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0813 00:04:31.720223  826514 cni.go:93] Creating CNI manager for ""
	I0813 00:04:31.720243  826514 cni.go:154] 1 nodes found, recommending kindnet
	I0813 00:04:31.720298  826514 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 00:04:31.720320  826514 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.22 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20210813000359-820289 NodeName:multinode-20210813000359-820289 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.22"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.39.22 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 00:04:31.720474  826514 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.22
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "multinode-20210813000359-820289"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.22
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.22"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0813 00:04:31.720582  826514 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --hostname-override=multinode-20210813000359-820289 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.22 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:multinode-20210813000359-820289 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0813 00:04:31.720645  826514 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 00:04:31.727820  826514 command_runner.go:124] > kubeadm
	I0813 00:04:31.727842  826514 command_runner.go:124] > kubectl
	I0813 00:04:31.727848  826514 command_runner.go:124] > kubelet
	I0813 00:04:31.728062  826514 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 00:04:31.728127  826514 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 00:04:31.734759  826514 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (511 bytes)
	I0813 00:04:31.746715  826514 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 00:04:31.758298  826514 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
	I0813 00:04:31.769853  826514 ssh_runner.go:149] Run: grep 192.168.39.22	control-plane.minikube.internal$ /etc/hosts
	I0813 00:04:31.773705  826514 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.22	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 00:04:31.783812  826514 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289 for IP: 192.168.39.22
	I0813 00:04:31.783864  826514 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key
	I0813 00:04:31.783891  826514 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key
	I0813 00:04:31.783950  826514 certs.go:294] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/client.key
	I0813 00:04:31.783972  826514 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/client.crt with IP's: []
	I0813 00:04:32.037921  826514 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/client.crt ...
	I0813 00:04:32.037950  826514 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/client.crt: {Name:mkf5df9641ea11c906574d810c1c29529a170608 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:04:32.038175  826514 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/client.key ...
	I0813 00:04:32.038188  826514 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/client.key: {Name:mkcd99f24d88fe2629fca2746c101b315deedb23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:04:32.038275  826514 certs.go:294] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/apiserver.key.a67f3e8b
	I0813 00:04:32.038289  826514 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/apiserver.crt.a67f3e8b with IP's: [192.168.39.22 10.96.0.1 127.0.0.1 10.0.0.1]
	I0813 00:04:32.287488  826514 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/apiserver.crt.a67f3e8b ...
	I0813 00:04:32.287529  826514 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/apiserver.crt.a67f3e8b: {Name:mkc518b64a186f076d19e8c89346facb1c87f59b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:04:32.287777  826514 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/apiserver.key.a67f3e8b ...
	I0813 00:04:32.287800  826514 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/apiserver.key.a67f3e8b: {Name:mk74b9e9b7324dd81ddf9c84974f48d24be1bf6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:04:32.287932  826514 certs.go:305] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/apiserver.crt.a67f3e8b -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/apiserver.crt
	I0813 00:04:32.288013  826514 certs.go:309] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/apiserver.key.a67f3e8b -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/apiserver.key
	I0813 00:04:32.288082  826514 certs.go:294] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/proxy-client.key
	I0813 00:04:32.288160  826514 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/proxy-client.crt with IP's: []
	I0813 00:04:32.394515  826514 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/proxy-client.crt ...
	I0813 00:04:32.394547  826514 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/proxy-client.crt: {Name:mkc269edc225dfbdbe858effb5699acd067027fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:04:32.394728  826514 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/proxy-client.key ...
	I0813 00:04:32.394741  826514 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/proxy-client.key: {Name:mkcaf59dee6bab4e8620b2e8ba22d6f73f0031eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:04:32.394817  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0813 00:04:32.394836  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0813 00:04:32.394845  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0813 00:04:32.394856  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0813 00:04:32.394865  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0813 00:04:32.394883  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0813 00:04:32.394895  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0813 00:04:32.394906  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0813 00:04:32.394960  826514 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/820289.pem (1338 bytes)
	W0813 00:04:32.395003  826514 certs.go:369] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/820289_empty.pem, impossibly tiny 0 bytes
	I0813 00:04:32.395019  826514 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 00:04:32.395044  826514 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem (1078 bytes)
	I0813 00:04:32.395075  826514 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem (1123 bytes)
	I0813 00:04:32.395099  826514 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem (1679 bytes)
	I0813 00:04:32.395140  826514 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/8202892.pem (1708 bytes)
	I0813 00:04:32.395168  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/8202892.pem -> /usr/share/ca-certificates/8202892.pem
	I0813 00:04:32.395182  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0813 00:04:32.395191  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/820289.pem -> /usr/share/ca-certificates/820289.pem
	I0813 00:04:32.396132  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 00:04:32.413144  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0813 00:04:32.429646  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 00:04:32.446026  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0813 00:04:32.463119  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 00:04:32.479069  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0813 00:04:32.494793  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 00:04:32.510640  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 00:04:32.526595  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/8202892.pem --> /usr/share/ca-certificates/8202892.pem (1708 bytes)
	I0813 00:04:32.543031  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 00:04:32.558714  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/820289.pem --> /usr/share/ca-certificates/820289.pem (1338 bytes)
	I0813 00:04:32.574494  826514 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 00:04:32.586113  826514 ssh_runner.go:149] Run: openssl version
	I0813 00:04:32.592091  826514 command_runner.go:124] > OpenSSL 1.1.1k  25 Mar 2021
	I0813 00:04:32.592152  826514 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8202892.pem && ln -fs /usr/share/ca-certificates/8202892.pem /etc/ssl/certs/8202892.pem"
	I0813 00:04:32.599913  826514 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8202892.pem
	I0813 00:04:32.604566  826514 command_runner.go:124] > -rw-r--r-- 1 root root 1708 Aug 12 23:59 /usr/share/ca-certificates/8202892.pem
	I0813 00:04:32.604598  826514 certs.go:416] hashing: -rw-r--r-- 1 root root 1708 Aug 12 23:59 /usr/share/ca-certificates/8202892.pem
	I0813 00:04:32.604653  826514 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8202892.pem
	I0813 00:04:32.610971  826514 command_runner.go:124] > 3ec20f2e
	I0813 00:04:32.611046  826514 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8202892.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 00:04:32.618917  826514 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 00:04:32.626707  826514 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 00:04:32.630915  826514 command_runner.go:124] > -rw-r--r-- 1 root root 1111 Aug 12 23:51 /usr/share/ca-certificates/minikubeCA.pem
	I0813 00:04:32.631085  826514 certs.go:416] hashing: -rw-r--r-- 1 root root 1111 Aug 12 23:51 /usr/share/ca-certificates/minikubeCA.pem
	I0813 00:04:32.631124  826514 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 00:04:32.636434  826514 command_runner.go:124] > b5213941
	I0813 00:04:32.636674  826514 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 00:04:32.644091  826514 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/820289.pem && ln -fs /usr/share/ca-certificates/820289.pem /etc/ssl/certs/820289.pem"
	I0813 00:04:32.651450  826514 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/820289.pem
	I0813 00:04:32.655613  826514 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Aug 12 23:59 /usr/share/ca-certificates/820289.pem
	I0813 00:04:32.655942  826514 certs.go:416] hashing: -rw-r--r-- 1 root root 1338 Aug 12 23:59 /usr/share/ca-certificates/820289.pem
	I0813 00:04:32.655980  826514 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/820289.pem
	I0813 00:04:32.661613  826514 command_runner.go:124] > 51391683
	I0813 00:04:32.662071  826514 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/820289.pem /etc/ssl/certs/51391683.0"
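	The three cert-link steps above all follow the same OpenSSL subject-hash convention; a minimal sketch of the pattern (file name illustrative):
	
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)
	    sudo ln -fs /etc/ssl/certs/example.pem "/etc/ssl/certs/${hash}.0"
	
	which is how 8202892.pem, minikubeCA.pem and 820289.pem become 3ec20f2e.0, b5213941.0 and 51391683.0 above.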
	I0813 00:04:32.669575  826514 kubeadm.go:390] StartCluster: {Name:multinode-20210813000359-820289 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12122/minikube-v1.22.0-1628238775-12122.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-20210813000359-820289 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.22 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0}
	I0813 00:04:32.669649  826514 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0813 00:04:32.669681  826514 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 00:04:32.701020  826514 cri.go:76] found id: ""
	I0813 00:04:32.701074  826514 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 00:04:32.708039  826514 command_runner.go:124] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0813 00:04:32.708064  826514 command_runner.go:124] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0813 00:04:32.708094  826514 command_runner.go:124] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0813 00:04:32.708212  826514 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 00:04:32.714812  826514 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 00:04:32.721263  826514 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0813 00:04:32.721280  826514 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0813 00:04:32.721288  826514 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0813 00:04:32.721493  826514 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 00:04:32.721607  826514 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 00:04:32.721661  826514 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	I0813 00:04:32.865000  826514 command_runner.go:124] > [init] Using Kubernetes version: v1.21.3
	I0813 00:04:32.865108  826514 command_runner.go:124] > [preflight] Running pre-flight checks
	I0813 00:04:33.166852  826514 command_runner.go:124] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0813 00:04:33.166965  826514 command_runner.go:124] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0813 00:04:33.167104  826514 command_runner.go:124] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0813 00:04:33.372502  826514 out.go:204]   - Generating certificates and keys ...
	I0813 00:04:33.370376  826514 command_runner.go:124] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0813 00:04:33.372598  826514 command_runner.go:124] > [certs] Using existing ca certificate authority
	I0813 00:04:33.372695  826514 command_runner.go:124] > [certs] Using existing apiserver certificate and key on disk
	I0813 00:04:33.550935  826514 command_runner.go:124] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0813 00:04:33.821015  826514 command_runner.go:124] > [certs] Generating "front-proxy-ca" certificate and key
	I0813 00:04:34.075284  826514 command_runner.go:124] > [certs] Generating "front-proxy-client" certificate and key
	I0813 00:04:34.267752  826514 command_runner.go:124] > [certs] Generating "etcd/ca" certificate and key
	I0813 00:04:34.577030  826514 command_runner.go:124] > [certs] Generating "etcd/server" certificate and key
	I0813 00:04:34.925803  826514 command_runner.go:124] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-20210813000359-820289] and IPs [192.168.39.22 127.0.0.1 ::1]
	I0813 00:04:34.925891  826514 command_runner.go:124] > [certs] Generating "etcd/peer" certificate and key
	I0813 00:04:34.926093  826514 command_runner.go:124] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-20210813000359-820289] and IPs [192.168.39.22 127.0.0.1 ::1]
	I0813 00:04:34.926174  826514 command_runner.go:124] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0813 00:04:35.101291  826514 command_runner.go:124] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0813 00:04:35.556390  826514 command_runner.go:124] > [certs] Generating "sa" key and public key
	I0813 00:04:35.556493  826514 command_runner.go:124] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0813 00:04:35.706103  826514 command_runner.go:124] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0813 00:04:35.807380  826514 command_runner.go:124] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0813 00:04:35.916494  826514 command_runner.go:124] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0813 00:04:36.231264  826514 command_runner.go:124] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0813 00:04:36.259008  826514 command_runner.go:124] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0813 00:04:36.259149  826514 command_runner.go:124] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0813 00:04:36.259215  826514 command_runner.go:124] > [kubelet-start] Starting the kubelet
	I0813 00:04:36.453900  826514 out.go:204]   - Booting up control plane ...
	I0813 00:04:36.451905  826514 command_runner.go:124] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0813 00:04:36.454016  826514 command_runner.go:124] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0813 00:04:36.464094  826514 command_runner.go:124] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0813 00:04:36.466485  826514 command_runner.go:124] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0813 00:04:36.466564  826514 command_runner.go:124] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0813 00:04:36.473333  826514 command_runner.go:124] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0813 00:04:52.473160  826514 command_runner.go:124] > [apiclient] All control plane components are healthy after 16.004446 seconds
	I0813 00:04:52.473336  826514 command_runner.go:124] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0813 00:04:52.496367  826514 command_runner.go:124] > [kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
	I0813 00:04:53.035422  826514 command_runner.go:124] > [upload-certs] Skipping phase. Please see --upload-certs
	I0813 00:04:53.037726  826514 command_runner.go:124] > [mark-control-plane] Marking the node multinode-20210813000359-820289 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0813 00:04:53.548428  826514 out.go:204]   - Configuring RBAC rules ...
	I0813 00:04:53.546841  826514 command_runner.go:124] > [bootstrap-token] Using token: 2bpigu.aauxs97v3zmdhtlx
	I0813 00:04:53.548591  826514 command_runner.go:124] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0813 00:04:53.558336  826514 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0813 00:04:53.572463  826514 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0813 00:04:53.583375  826514 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0813 00:04:53.606316  826514 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0813 00:04:53.615312  826514 command_runner.go:124] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0813 00:04:53.631160  826514 command_runner.go:124] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0813 00:04:54.020398  826514 command_runner.go:124] > [addons] Applied essential addon: CoreDNS
	I0813 00:04:54.095132  826514 command_runner.go:124] > [addons] Applied essential addon: kube-proxy
	I0813 00:04:54.099089  826514 command_runner.go:124] > Your Kubernetes control-plane has initialized successfully!
	I0813 00:04:54.099199  826514 command_runner.go:124] > To start using your cluster, you need to run the following as a regular user:
	I0813 00:04:54.099237  826514 command_runner.go:124] >   mkdir -p $HOME/.kube
	I0813 00:04:54.099357  826514 command_runner.go:124] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0813 00:04:54.099459  826514 command_runner.go:124] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0813 00:04:54.099574  826514 command_runner.go:124] > Alternatively, if you are the root user, you can run:
	I0813 00:04:54.099639  826514 command_runner.go:124] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0813 00:04:54.099693  826514 command_runner.go:124] > You should now deploy a pod network to the cluster.
	I0813 00:04:54.099783  826514 command_runner.go:124] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0813 00:04:54.099848  826514 command_runner.go:124] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0813 00:04:54.099928  826514 command_runner.go:124] > You can now join any number of control-plane nodes by copying certificate authorities
	I0813 00:04:54.099994  826514 command_runner.go:124] > and service account keys on each node and then running the following as root:
	I0813 00:04:54.100067  826514 command_runner.go:124] >   kubeadm join control-plane.minikube.internal:8443 --token 2bpigu.aauxs97v3zmdhtlx \
	I0813 00:04:54.100157  826514 command_runner.go:124] > 	--discovery-token-ca-cert-hash sha256:a2926d5accd9a2d1096d4e62979978bfd5a94255856b68d015a34969efd36535 \
	I0813 00:04:54.100183  826514 command_runner.go:124] > 	--control-plane 
	I0813 00:04:54.100294  826514 command_runner.go:124] > Then you can join any number of worker nodes by running the following on each as root:
	I0813 00:04:54.100375  826514 command_runner.go:124] > kubeadm join control-plane.minikube.internal:8443 --token 2bpigu.aauxs97v3zmdhtlx \
	I0813 00:04:54.100518  826514 command_runner.go:124] > 	--discovery-token-ca-cert-hash sha256:a2926d5accd9a2d1096d4e62979978bfd5a94255856b68d015a34969efd36535 
	I0813 00:04:54.101340  826514 command_runner.go:124] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0813 00:04:54.101797  826514 cni.go:93] Creating CNI manager for ""
	I0813 00:04:54.101817  826514 cni.go:154] 1 nodes found, recommending kindnet
	I0813 00:04:54.103654  826514 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0813 00:04:54.103770  826514 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0813 00:04:54.113142  826514 command_runner.go:124] >   File: /opt/cni/bin/portmap
	I0813 00:04:54.113166  826514 command_runner.go:124] >   Size: 2853400   	Blocks: 5576       IO Block: 4096   regular file
	I0813 00:04:54.113176  826514 command_runner.go:124] > Device: 10h/16d	Inode: 22646       Links: 1
	I0813 00:04:54.113186  826514 command_runner.go:124] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0813 00:04:54.113194  826514 command_runner.go:124] > Access: 2021-08-13 00:04:13.266164804 +0000
	I0813 00:04:54.113204  826514 command_runner.go:124] > Modify: 2021-08-06 09:23:24.000000000 +0000
	I0813 00:04:54.113212  826514 command_runner.go:124] > Change: 2021-08-13 00:04:09.548164804 +0000
	I0813 00:04:54.113229  826514 command_runner.go:124] >  Birth: -
	I0813 00:04:54.113283  826514 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0813 00:04:54.113297  826514 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0813 00:04:54.145305  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0813 00:04:54.641847  826514 command_runner.go:124] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0813 00:04:54.641885  826514 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0813 00:04:54.641895  826514 command_runner.go:124] > serviceaccount/kindnet created
	I0813 00:04:54.641902  826514 command_runner.go:124] > daemonset.apps/kindnet created
	I0813 00:04:54.641976  826514 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 00:04:54.642097  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:04:54.642118  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=dc1c3ca26e9449ce488a773126b8450402c94a19 minikube.k8s.io/name=multinode-20210813000359-820289 minikube.k8s.io/updated_at=2021_08_13T00_04_54_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:04:54.669213  826514 command_runner.go:124] > -16
	I0813 00:04:54.669288  826514 ops.go:34] apiserver oom_adj: -16
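	[editor's note] ops.go reads the apiserver's OOM adjustment by running `cat /proc/$(pgrep kube-apiserver)/oom_adj`; -16 means the kernel strongly avoids OOM-killing the process. A minimal local equivalent (assumes exactly one kube-apiserver process):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Resolve the apiserver PID, then read its legacy oom_adj score.
	pid, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		fmt.Println("pgrep failed:", err)
		return
	}
	path := fmt.Sprintf("/proc/%s/oom_adj", strings.TrimSpace(string(pid)))
	score, err := os.ReadFile(path)
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Printf("apiserver oom_adj: %s", score) // -16 in the run above
}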
	I0813 00:04:54.792518  826514 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0813 00:04:54.792973  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:04:54.833553  826514 command_runner.go:124] > node/multinode-20210813000359-820289 labeled
	I0813 00:04:54.912960  826514 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:04:55.414157  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:04:55.516079  826514 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:04:55.913637  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:04:56.014242  826514 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:04:56.413696  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:04:56.514073  826514 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:04:56.913654  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:04:57.021844  826514 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:04:57.414343  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:04:57.520853  826514 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:04:57.913796  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:04:58.012866  826514 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:04:58.414564  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:04:58.515899  826514 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:04:58.913609  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:04:59.024558  826514 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:04:59.413652  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:04:59.714571  826514 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:04:59.914021  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:05:00.018402  826514 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:05:00.413536  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:05:00.524937  826514 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:05:00.913812  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:05:01.025033  826514 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:05:01.414269  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:05:01.523883  826514 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:05:01.914430  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:05:02.043133  826514 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:05:02.413664  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:05:02.521870  826514 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:05:02.914539  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:05:03.022988  826514 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:05:03.414311  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:05:03.513325  826514 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:05:03.914133  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:05:04.023796  826514 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:05:04.413681  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:05:04.508698  826514 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:05:04.913739  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:05:05.074859  826514 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:05:05.413518  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:05:05.539828  826514 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:05:05.914163  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:05:06.044113  826514 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:05:06.414297  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:05:06.651694  826514 command_runner.go:124] > NAME      SECRETS   AGE
	I0813 00:05:06.651751  826514 command_runner.go:124] > default   1         0s
	I0813 00:05:06.654454  826514 kubeadm.go:985] duration metric: took 12.012401848s to wait for elevateKubeSystemPrivileges.
	I0813 00:05:06.654484  826514 kubeadm.go:392] StartCluster complete in 33.984914614s
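	[editor's note] The retry loop above (`kubectl get sa default` every ~500ms from 00:04:54 to 00:05:06) is minikube waiting for kube-controller-manager to create the "default" service account; it accounts for about 12s of the 34s StartCluster total. The same wait expressed directly against the API with client-go (a sketch assuming /var/lib/minikube/kubeconfig is readable; minikube actually shells out to kubectl, as logged):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 500ms, as the log does, until the ServiceAccount exists.
	err = wait.PollImmediate(500*time.Millisecond, 2*time.Minute, func() (bool, error) {
		_, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
		if err != nil {
			return false, nil // not found yet: keep polling
		}
		return true, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("default service account is ready")
}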
	I0813 00:05:06.654509  826514 settings.go:142] acquiring lock: {Name:mk8798f78c6f0a1d20052a3e99a18e56ee754eb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:05:06.654646  826514 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0813 00:05:06.656045  826514 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig: {Name:mk56dc63045ab5614dcc5cc2eaf1f7d3442c655e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:05:06.656627  826514 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0813 00:05:06.656994  826514 kapi.go:59] client config for multinode-20210813000359-820289: &rest.Config{Host:"https://192.168.39.22:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 00:05:06.657620  826514 cert_rotation.go:137] Starting client certificate rotation controller
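	[editor's note] The rest.Config dump above shows pure TLS client-certificate auth: no token or basic auth, just the profile's client.crt/client.key pair verified against .minikube/ca.crt. Building an equivalent config by hand looks roughly like this (placeholder paths; a sketch, not how kapi.go actually assembles it):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Hand-built equivalent of the logged client config: cert/key auth
	// against the control plane at 192.168.39.22:8443.
	cfg := &rest.Config{
		Host: "https://192.168.39.22:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/path/to/profiles/<profile>/client.crt", // placeholder paths,
			KeyFile:  "/path/to/profiles/<profile>/client.key", // full ones are in the log
			CAFile:   "/path/to/.minikube/ca.crt",
		},
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println("clientset ready:", cs != nil)
}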
	I0813 00:05:06.659270  826514 round_trippers.go:432] GET https://192.168.39.22:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0813 00:05:06.659291  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:06.659298  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:06.659303  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:06.671634  826514 round_trippers.go:457] Response Status: 200 OK in 12 milliseconds
	I0813 00:05:06.671659  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:06.671666  826514 round_trippers.go:463]     Content-Length: 291
	I0813 00:05:06.671671  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:06 GMT
	I0813 00:05:06.671675  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:06.671679  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:06.671684  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:06.671689  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:06.672497  826514 request.go:1123] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"1e1de93b-f119-4736-9f21-896671cf8b78","resourceVersion":"415","creationTimestamp":"2021-08-13T00:04:53Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0813 00:05:06.673363  826514 request.go:1123] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"1e1de93b-f119-4736-9f21-896671cf8b78","resourceVersion":"415","creationTimestamp":"2021-08-13T00:04:53Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0813 00:05:06.673436  826514 round_trippers.go:432] PUT https://192.168.39.22:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0813 00:05:06.673450  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:06.673457  826514 round_trippers.go:442]     Content-Type: application/json
	I0813 00:05:06.673463  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:06.673470  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:06.698684  826514 round_trippers.go:457] Response Status: 200 OK in 25 milliseconds
	I0813 00:05:06.698702  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:06.698708  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:06.698713  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:06.698720  826514 round_trippers.go:463]     Content-Length: 291
	I0813 00:05:06.698727  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:06 GMT
	I0813 00:05:06.698738  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:06.698742  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:06.698763  826514 request.go:1123] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"1e1de93b-f119-4736-9f21-896671cf8b78","resourceVersion":"419","creationTimestamp":"2021-08-13T00:04:53Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0813 00:05:07.199474  826514 round_trippers.go:432] GET https://192.168.39.22:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0813 00:05:07.199504  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:07.199510  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:07.199515  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:07.201870  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:07.201894  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:07.201901  826514 round_trippers.go:463]     Content-Length: 291
	I0813 00:05:07.201906  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:07 GMT
	I0813 00:05:07.201913  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:07.201918  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:07.201922  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:07.201927  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:07.201956  826514 request.go:1123] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"1e1de93b-f119-4736-9f21-896671cf8b78","resourceVersion":"448","creationTimestamp":"2021-08-13T00:04:53Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0813 00:05:07.202081  826514 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "multinode-20210813000359-820289" rescaled to 1
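	[editor's note] The GET/PUT pair above edits the Deployment's autoscaling/v1 Scale subresource, dropping coredns from 2 replicas to 1 (a single-node cluster does not need two). With client-go's typed client the same rescale is (a sketch, reusing the kubeconfig assumption above):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Read the Scale subresource, lower spec.replicas, write it back --
	// the same GET then PUT visible in the round_trippers trace above.
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 1
	if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("coredns rescaled to 1 replica")
}

	Going through the Scale subresource rather than rewriting the whole Deployment spec touches only the replica count, which is why the request and response bodies above are just 291 bytes each.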
	I0813 00:05:07.202138  826514 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.39.22 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 00:05:07.204288  826514 out.go:177] * Verifying Kubernetes components...
	I0813 00:05:07.202214  826514 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 00:05:07.202239  826514 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0813 00:05:07.204366  826514 addons.go:59] Setting storage-provisioner=true in profile "multinode-20210813000359-820289"
	I0813 00:05:07.204388  826514 addons.go:59] Setting default-storageclass=true in profile "multinode-20210813000359-820289"
	I0813 00:05:07.204404  826514 addons.go:135] Setting addon storage-provisioner=true in "multinode-20210813000359-820289"
	I0813 00:05:07.204411  826514 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-20210813000359-820289"
	W0813 00:05:07.204415  826514 addons.go:147] addon storage-provisioner should already be in state true
	I0813 00:05:07.204447  826514 host.go:66] Checking if "multinode-20210813000359-820289" exists ...
	I0813 00:05:07.204370  826514 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 00:05:07.204932  826514 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 00:05:07.204943  826514 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 00:05:07.204976  826514 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 00:05:07.205070  826514 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 00:05:07.216734  826514 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:32819
	I0813 00:05:07.217267  826514 main.go:130] libmachine: () Calling .GetVersion
	I0813 00:05:07.217862  826514 main.go:130] libmachine: Using API Version  1
	I0813 00:05:07.217890  826514 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 00:05:07.218272  826514 main.go:130] libmachine: () Calling .GetMachineName
	I0813 00:05:07.218485  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetState
	I0813 00:05:07.220513  826514 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45837
	I0813 00:05:07.220904  826514 main.go:130] libmachine: () Calling .GetVersion
	I0813 00:05:07.221359  826514 main.go:130] libmachine: Using API Version  1
	I0813 00:05:07.221386  826514 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 00:05:07.221754  826514 main.go:130] libmachine: () Calling .GetMachineName
	I0813 00:05:07.222356  826514 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 00:05:07.222406  826514 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 00:05:07.222638  826514 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0813 00:05:07.223015  826514 kapi.go:59] client config for multinode-20210813000359-820289: &rest.Config{Host:"https://192.168.39.22:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 00:05:07.224934  826514 round_trippers.go:432] GET https://192.168.39.22:8443/apis/storage.k8s.io/v1/storageclasses
	I0813 00:05:07.224958  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:07.224967  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:07.224973  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:07.229856  826514 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0813 00:05:07.229876  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:07.229882  826514 round_trippers.go:463]     Content-Length: 109
	I0813 00:05:07.229888  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:07 GMT
	I0813 00:05:07.229893  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:07.229898  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:07.229912  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:07.229917  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:07.229938  826514 request.go:1123] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"448"},"items":[]}
	I0813 00:05:07.230757  826514 addons.go:135] Setting addon default-storageclass=true in "multinode-20210813000359-820289"
	W0813 00:05:07.230783  826514 addons.go:147] addon default-storageclass should already be in state true
	I0813 00:05:07.230814  826514 host.go:66] Checking if "multinode-20210813000359-820289" exists ...
	I0813 00:05:07.231218  826514 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 00:05:07.231262  826514 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 00:05:07.234001  826514 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34377
	I0813 00:05:07.234429  826514 main.go:130] libmachine: () Calling .GetVersion
	I0813 00:05:07.234893  826514 main.go:130] libmachine: Using API Version  1
	I0813 00:05:07.234919  826514 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 00:05:07.235330  826514 main.go:130] libmachine: () Calling .GetMachineName
	I0813 00:05:07.235548  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetState
	I0813 00:05:07.238762  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .DriverName
	I0813 00:05:07.240913  826514 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 00:05:07.241036  826514 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 00:05:07.241054  826514 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 00:05:07.241073  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHHostname
	I0813 00:05:07.243327  826514 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43461
	I0813 00:05:07.243776  826514 main.go:130] libmachine: () Calling .GetVersion
	I0813 00:05:07.244257  826514 main.go:130] libmachine: Using API Version  1
	I0813 00:05:07.244286  826514 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 00:05:07.244667  826514 main.go:130] libmachine: () Calling .GetMachineName
	I0813 00:05:07.245206  826514 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 00:05:07.245261  826514 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 00:05:07.246919  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:05:07.247418  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e4:55", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:04:13 +0000 UTC Type:0 Mac:52:54:00:b5:e4:55 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:multinode-20210813000359-820289 Clientid:01:52:54:00:b5:e4:55}
	I0813 00:05:07.247457  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined IP address 192.168.39.22 and MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:05:07.247589  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHPort
	I0813 00:05:07.247794  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHKeyPath
	I0813 00:05:07.247971  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHUsername
	I0813 00:05:07.248118  826514 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289/id_rsa Username:docker}
	I0813 00:05:07.256451  826514 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38471
	I0813 00:05:07.256858  826514 main.go:130] libmachine: () Calling .GetVersion
	I0813 00:05:07.257396  826514 main.go:130] libmachine: Using API Version  1
	I0813 00:05:07.257419  826514 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 00:05:07.257799  826514 main.go:130] libmachine: () Calling .GetMachineName
	I0813 00:05:07.257985  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetState
	I0813 00:05:07.261030  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .DriverName
	I0813 00:05:07.261242  826514 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 00:05:07.261256  826514 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 00:05:07.261270  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHHostname
	I0813 00:05:07.266322  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:05:07.266735  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e4:55", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:04:13 +0000 UTC Type:0 Mac:52:54:00:b5:e4:55 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:multinode-20210813000359-820289 Clientid:01:52:54:00:b5:e4:55}
	I0813 00:05:07.266760  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined IP address 192.168.39.22 and MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:05:07.266929  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHPort
	I0813 00:05:07.267109  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHKeyPath
	I0813 00:05:07.267250  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHUsername
	I0813 00:05:07.267408  826514 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289/id_rsa Username:docker}
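	[editor's note] sshutil.go above opens a plain SSH session to the node (user docker, key auth, 192.168.39.22:22), over which the scp and kubectl steps that follow run. A self-contained sketch of such a client with golang.org/x/crypto/ssh (placeholder key path; host-key checking skipped, as is usual for a throwaway test VM):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/path/to/.minikube/machines/<profile>/id_rsa") // placeholder path
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for ephemeral test VMs
	}
	client, err := ssh.Dial("tcp", "192.168.39.22:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("uname -a")
	fmt.Printf("%s (err: %v)\n", out, err)
}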
	I0813 00:05:07.437584  826514 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 00:05:07.535996  826514 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 00:05:07.538887  826514 command_runner.go:124] > apiVersion: v1
	I0813 00:05:07.538907  826514 command_runner.go:124] > data:
	I0813 00:05:07.538913  826514 command_runner.go:124] >   Corefile: |
	I0813 00:05:07.538919  826514 command_runner.go:124] >     .:53 {
	I0813 00:05:07.538926  826514 command_runner.go:124] >         errors
	I0813 00:05:07.538933  826514 command_runner.go:124] >         health {
	I0813 00:05:07.538950  826514 command_runner.go:124] >            lameduck 5s
	I0813 00:05:07.538955  826514 command_runner.go:124] >         }
	I0813 00:05:07.538964  826514 command_runner.go:124] >         ready
	I0813 00:05:07.538974  826514 command_runner.go:124] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0813 00:05:07.538986  826514 command_runner.go:124] >            pods insecure
	I0813 00:05:07.538995  826514 command_runner.go:124] >            fallthrough in-addr.arpa ip6.arpa
	I0813 00:05:07.539002  826514 command_runner.go:124] >            ttl 30
	I0813 00:05:07.539008  826514 command_runner.go:124] >         }
	I0813 00:05:07.539015  826514 command_runner.go:124] >         prometheus :9153
	I0813 00:05:07.539022  826514 command_runner.go:124] >         forward . /etc/resolv.conf {
	I0813 00:05:07.539031  826514 command_runner.go:124] >            max_concurrent 1000
	I0813 00:05:07.539036  826514 command_runner.go:124] >         }
	I0813 00:05:07.539044  826514 command_runner.go:124] >         cache 30
	I0813 00:05:07.539049  826514 command_runner.go:124] >         loop
	I0813 00:05:07.539056  826514 command_runner.go:124] >         reload
	I0813 00:05:07.539061  826514 command_runner.go:124] >         loadbalance
	I0813 00:05:07.539066  826514 command_runner.go:124] >     }
	I0813 00:05:07.539070  826514 command_runner.go:124] > kind: ConfigMap
	I0813 00:05:07.539080  826514 command_runner.go:124] > metadata:
	I0813 00:05:07.539125  826514 command_runner.go:124] >   creationTimestamp: "2021-08-13T00:04:53Z"
	I0813 00:05:07.539135  826514 command_runner.go:124] >   name: coredns
	I0813 00:05:07.539139  826514 command_runner.go:124] >   namespace: kube-system
	I0813 00:05:07.539143  826514 command_runner.go:124] >   resourceVersion: "272"
	I0813 00:05:07.539149  826514 command_runner.go:124] >   uid: df8ecb27-57a8-4d1a-9ddc-10804cd545c7
	I0813 00:05:07.539292  826514 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
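	[editor's note] The bash pipeline above rewrites the CoreDNS ConfigMap in place: sed inserts a hosts block immediately before the forward directive so that host.minikube.internal resolves to the host-side gateway (192.168.39.1), then kubectl replace pushes the edited Corefile back. Reconstructed from the sed expression (not captured from the cluster), the patched region of the Corefile should read:

        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }

	The fallthrough line matters: without it, names that miss the hosts table would get NXDOMAIN instead of falling through to the upstream resolvers.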
	I0813 00:05:07.539675  826514 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0813 00:05:07.540039  826514 kapi.go:59] client config for multinode-20210813000359-820289: &rest.Config{Host:"https://192.168.39.22:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 00:05:07.542132  826514 node_ready.go:35] waiting up to 6m0s for node "multinode-20210813000359-820289" to be "Ready" ...
	I0813 00:05:07.542261  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289
	I0813 00:05:07.542280  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:07.542288  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:07.542295  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:07.544528  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:07.544549  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:07.544555  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:07.544561  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:07.544565  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:07.544570  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:07.544575  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:07 GMT
	I0813 00:05:07.544776  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","resourceVersion":"378","creationTimestamp":"2021-08-13T00:04:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813000359-820289","minikube.k8s.io/updated_at":"2021_08_13T00_04_54_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6557 chars]
	I0813 00:05:07.546601  826514 node_ready.go:49] node "multinode-20210813000359-820289" has status "Ready":"True"
	I0813 00:05:07.546631  826514 node_ready.go:38] duration metric: took 4.467398ms waiting for node "multinode-20210813000359-820289" to be "Ready" ...
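	[editor's note] node_ready.go's "Ready" check above is a GET on the Node object followed by a scan of status.conditions for NodeReady=True. Roughly, with client-go (a sketch reusing the kubeconfig assumption from earlier):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's NodeReady condition is True.
func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(nodeReady(cs, "multinode-20210813000359-820289"))
}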
	I0813 00:05:07.546646  826514 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 00:05:07.546760  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods
	I0813 00:05:07.546778  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:07.546788  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:07.546814  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:07.552626  826514 round_trippers.go:457] Response Status: 200 OK in 5 milliseconds
	I0813 00:05:07.552646  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:07.552653  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:07.552658  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:07.552668  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:07.552673  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:07.552678  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:07 GMT
	I0813 00:05:07.553376  826514 request.go:1123] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"coredns-558bd4d5db-rgwt6","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"5bd6a56d-1eb3-4e0f-b537-15d0b4b176f9","resourceVersion":"443","creationTimestamp":"2021-08-13T00:05:06Z","deletionTimestamp":"2021-08-13T00:05:36Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"65fbbea7-e9b5-486b-a329-69ff09468fc0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65fbbea7-e9b5-486b-a329-69ff09468fc0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:control [truncated 51943 chars]
	I0813 00:05:07.562062  826514 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-rgwt6" in "kube-system" namespace to be "Ready" ...
	I0813 00:05:07.562147  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-rgwt6
	I0813 00:05:07.562161  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:07.562170  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:07.562180  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:07.567427  826514 round_trippers.go:457] Response Status: 200 OK in 5 milliseconds
	I0813 00:05:07.567441  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:07.567445  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:07.567448  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:07.567451  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:07.567456  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:07.567460  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:07 GMT
	I0813 00:05:07.568485  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-rgwt6","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"5bd6a56d-1eb3-4e0f-b537-15d0b4b176f9","resourceVersion":"443","creationTimestamp":"2021-08-13T00:05:06Z","deletionTimestamp":"2021-08-13T00:05:36Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"65fbbea7-e9b5-486b-a329-69ff09468fc0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65fbbea7-e9b5-486b-a329-69ff09468fc0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:sp [truncated 4344 chars]
	I0813 00:05:07.571691  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289
	I0813 00:05:07.571734  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:07.571747  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:07.571753  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:07.574113  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:07.574132  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:07.574139  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:07.574146  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:07.574157  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:07.574162  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:07.574167  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:07 GMT
	I0813 00:05:07.574401  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","resourceVersion":"378","creationTimestamp":"2021-08-13T00:04:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813000359-820289","minikube.k8s.io/updated_at":"2021_08_13T00_04_54_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6557 chars]
	I0813 00:05:08.075586  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-rgwt6
	I0813 00:05:08.075625  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:08.075633  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:08.075638  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:08.082676  826514 round_trippers.go:457] Response Status: 200 OK in 7 milliseconds
	I0813 00:05:08.082696  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:08.082700  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:08.082703  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:08.082712  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:08.082717  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:08.082721  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:08 GMT
	I0813 00:05:08.083793  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-rgwt6","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"5bd6a56d-1eb3-4e0f-b537-15d0b4b176f9","resourceVersion":"443","creationTimestamp":"2021-08-13T00:05:06Z","deletionTimestamp":"2021-08-13T00:05:36Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"65fbbea7-e9b5-486b-a329-69ff09468fc0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65fbbea7-e9b5-486b-a329-69ff09468fc0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:sp [truncated 4344 chars]
	I0813 00:05:08.084098  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289
	I0813 00:05:08.084112  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:08.084117  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:08.084121  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:08.090615  826514 round_trippers.go:457] Response Status: 200 OK in 6 milliseconds
	I0813 00:05:08.090636  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:08.090643  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:08.090648  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:08.090653  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:08 GMT
	I0813 00:05:08.090658  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:08.090662  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:08.091007  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","resourceVersion":"378","creationTimestamp":"2021-08-13T00:04:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813000359-820289","minikube.k8s.io/updated_at":"2021_08_13T00_04_54_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6557 chars]
	I0813 00:05:08.575831  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-rgwt6
	I0813 00:05:08.575865  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:08.575874  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:08.575880  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:08.578832  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:08.578852  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:08.578859  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:08.578863  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:08.578867  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:08.578875  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:08 GMT
	I0813 00:05:08.578879  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:08.578965  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-rgwt6","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"5bd6a56d-1eb3-4e0f-b537-15d0b4b176f9","resourceVersion":"443","creationTimestamp":"2021-08-13T00:05:06Z","deletionTimestamp":"2021-08-13T00:05:36Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"65fbbea7-e9b5-486b-a329-69ff09468fc0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65fbbea7-e9b5-486b-a329-69ff09468fc0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:sp [truncated 4344 chars]
	I0813 00:05:08.579265  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289
	I0813 00:05:08.579282  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:08.579289  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:08.579295  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:08.582567  826514 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 00:05:08.582587  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:08.582592  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:08.582595  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:08.582598  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:08.582602  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:08.582605  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:08 GMT
	I0813 00:05:08.582814  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","resourceVersion":"378","creationTimestamp":"2021-08-13T00:04:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813000359-820289","minikube.k8s.io/updated_at":"2021_08_13T00_04_54_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6557 chars]
	I0813 00:05:09.015561  826514 command_runner.go:124] > serviceaccount/storage-provisioner created
	I0813 00:05:09.036749  826514 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0813 00:05:09.055361  826514 command_runner.go:124] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0813 00:05:09.073972  826514 command_runner.go:124] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0813 00:05:09.075082  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-rgwt6
	I0813 00:05:09.075098  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:09.075103  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:09.075108  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:09.078011  826514 round_trippers.go:457] Response Status: 404 Not Found in 2 milliseconds
	I0813 00:05:09.078032  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:09.078039  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:09 GMT
	I0813 00:05:09.078045  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:09.078050  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:09.078058  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:09.078063  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:09.078068  826514 round_trippers.go:463]     Content-Length: 216
	I0813 00:05:09.078092  826514 request.go:1123] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods \"coredns-558bd4d5db-rgwt6\" not found","reason":"NotFound","details":{"name":"coredns-558bd4d5db-rgwt6","kind":"pods"},"code":404}
	I0813 00:05:09.078826  826514 pod_ready.go:97] error getting pod "coredns-558bd4d5db-rgwt6" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-rgwt6" not found
	I0813 00:05:09.078861  826514 pod_ready.go:81] duration metric: took 1.516767371s waiting for pod "coredns-558bd4d5db-rgwt6" in "kube-system" namespace to be "Ready" ...
	E0813 00:05:09.078878  826514 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-rgwt6" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-rgwt6" not found
	I0813 00:05:09.078887  826514 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-sstrb" in "kube-system" namespace to be "Ready" ...
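	[editor's note] Note what happened just above: the rescale to 1 replica deleted coredns-558bd4d5db-rgwt6 mid-wait, so its GET began returning 404 and pod_ready.go skipped it and moved on to the surviving coredns-558bd4d5db-sstrb. A waiter with that tolerance looks roughly like this (a sketch; apierrors.IsNotFound is the standard client-go test for a 404):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod's Ready condition; NotFound stops the wait
// with a "skipping" error, since the pod was deleted (as rgwt6 was above).
func waitPodReady(cs *kubernetes.Clientset, ns, name string) error {
	return wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return false, fmt.Errorf("pod %s/%s disappeared (skipping)", ns, name)
		}
		if err != nil {
			return false, nil // transient API error: retry
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitPodReady(cs, "kube-system", "coredns-558bd4d5db-sstrb"))
}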
	I0813 00:05:09.078951  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-sstrb
	I0813 00:05:09.078962  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:09.078968  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:09.078974  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:09.085109  826514 command_runner.go:124] > endpoints/k8s.io-minikube-hostpath created
	I0813 00:05:09.087697  826514 round_trippers.go:457] Response Status: 200 OK in 8 milliseconds
	I0813 00:05:09.087725  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:09.087731  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:09.087736  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:09.087743  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:09.087752  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:09.087764  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:09 GMT
	I0813 00:05:09.087942  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-sstrb","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"16f6c77d-26a2-47e7-9c19-74736961cc13","resourceVersion":"455","creationTimestamp":"2021-08-13T00:05:06Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"65fbbea7-e9b5-486b-a329-69ff09468fc0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65fbbea7-e9b5-486b-a329-69ff09468fc0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5629 chars]
	I0813 00:05:09.088353  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289
	I0813 00:05:09.088379  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:09.088386  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:09.088392  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:09.091901  826514 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 00:05:09.091918  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:09.091924  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:09.091929  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:09 GMT
	I0813 00:05:09.091933  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:09.091938  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:09.091941  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:09.092323  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","resourceVersion":"378","creationTimestamp":"2021-08-13T00:04:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813000359-820289","minikube.k8s.io/updated_at":"2021_08_13T00_04_54_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6557 chars]
	I0813 00:05:09.120148  826514 command_runner.go:124] > pod/storage-provisioner created
	I0813 00:05:09.126027  826514 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.688404152s)
	I0813 00:05:09.126089  826514 main.go:130] libmachine: Making call to close driver server
	I0813 00:05:09.126106  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .Close
	I0813 00:05:09.126379  826514 main.go:130] libmachine: Successfully made call to close driver server
	I0813 00:05:09.126398  826514 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 00:05:09.126414  826514 main.go:130] libmachine: Making call to close driver server
	I0813 00:05:09.126429  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .Close
	I0813 00:05:09.126428  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | Closing plugin on server side
	I0813 00:05:09.126659  826514 main.go:130] libmachine: Successfully made call to close driver server
	I0813 00:05:09.126678  826514 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 00:05:09.174388  826514 command_runner.go:124] > storageclass.storage.k8s.io/standard created
	I0813 00:05:09.178650  826514 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.642611617s)
	I0813 00:05:09.178693  826514 main.go:130] libmachine: Making call to close driver server
	I0813 00:05:09.178705  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .Close
	I0813 00:05:09.178957  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | Closing plugin on server side
	I0813 00:05:09.178984  826514 main.go:130] libmachine: Successfully made call to close driver server
	I0813 00:05:09.179017  826514 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 00:05:09.179025  826514 main.go:130] libmachine: Making call to close driver server
	I0813 00:05:09.179034  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .Close
	I0813 00:05:09.179300  826514 main.go:130] libmachine: Successfully made call to close driver server
	I0813 00:05:09.179324  826514 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 00:05:09.179324  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | Closing plugin on server side
	I0813 00:05:09.179339  826514 main.go:130] libmachine: Making call to close driver server
	I0813 00:05:09.179355  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .Close
	I0813 00:05:09.179599  826514 main.go:130] libmachine: Successfully made call to close driver server
	I0813 00:05:09.179613  826514 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 00:05:09.181826  826514 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0813 00:05:09.181846  826514 addons.go:344] enableAddons completed in 1.979615938s
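Both addon manifests above were applied by running kubectl on the guest over SSH, with the elapsed time reported by ssh_runner.go:189. A rough sketch of that run-and-time pattern using golang.org/x/crypto/ssh; the client setup is assumed, the paths mirror the log, and the rest is illustrative rather than minikube's actual implementation:

    import (
        "log"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // applyAddon runs kubectl on the node over SSH and logs the duration,
    // the way the "Completed: ..." lines above are produced.
    func applyAddon(client *ssh.Client, manifest string) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()

        cmd := "sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
            "/var/lib/minikube/binaries/v1.21.3/kubectl apply -f " + manifest
        start := time.Now()
        out, err := sess.CombinedOutput(cmd)
        log.Printf("Completed: %s: (%s)\n%s", cmd, time.Since(start), out)
        return err
    }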
	I0813 00:05:09.212788  826514 command_runner.go:124] > configmap/coredns replaced
	I0813 00:05:09.212832  826514 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.673513134s)
	I0813 00:05:09.212853  826514 start.go:736] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS
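The pipeline above rewrites the coredns ConfigMap in place: it inserts a hosts block immediately before the forward directive, so host.minikube.internal resolves to the host's gateway address from inside the cluster. After the replace, the relevant part of the Corefile reads:

        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf

fallthrough hands every other name on to the next plugin, so only the injected record is answered from the hosts block.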
	I0813 00:05:09.593598  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-sstrb
	I0813 00:05:09.593632  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:09.593641  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:09.593656  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:09.597341  826514 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 00:05:09.597364  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:09.597371  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:09.597375  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:09.597380  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:09.597384  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:09.597388  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:09 GMT
	I0813 00:05:09.597885  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-sstrb","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"16f6c77d-26a2-47e7-9c19-74736961cc13","resourceVersion":"455","creationTimestamp":"2021-08-13T00:05:06Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"65fbbea7-e9b5-486b-a329-69ff09468fc0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65fbbea7-e9b5-486b-a329-69ff09468fc0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5629 chars]
	I0813 00:05:09.598307  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289
	I0813 00:05:09.598363  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:09.598370  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:09.598375  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:09.601146  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:09.601171  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:09.601178  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:09.601183  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:09.601188  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:09.601192  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:09.601197  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:09 GMT
	I0813 00:05:09.602228  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","resourceVersion":"378","creationTimestamp":"2021-08-13T00:04:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813000359-820289","minikube.k8s.io/updated_at":"2021_08_13T00_04_54_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6557 chars]
	I0813 00:05:10.093298  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-sstrb
	I0813 00:05:10.093332  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:10.093340  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:10.093347  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:10.096086  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:10.096107  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:10.096114  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:10.096118  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:10.096123  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:10.096127  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:10.096131  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:10 GMT
	I0813 00:05:10.096404  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-sstrb","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"16f6c77d-26a2-47e7-9c19-74736961cc13","resourceVersion":"455","creationTimestamp":"2021-08-13T00:05:06Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"65fbbea7-e9b5-486b-a329-69ff09468fc0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65fbbea7-e9b5-486b-a329-69ff09468fc0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5629 chars]
	I0813 00:05:10.096747  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289
	I0813 00:05:10.096760  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:10.096765  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:10.096769  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:10.098955  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:10.098993  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:10.098999  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:10.099003  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:10.099007  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:10.099012  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:10.099016  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:10 GMT
	I0813 00:05:10.099217  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","resourceVersion":"378","creationTimestamp":"2021-08-13T00:04:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813000359-820289","minikube.k8s.io/updated_at":"2021_08_13T00_04_54_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6557 chars]
	I0813 00:05:10.592897  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-sstrb
	I0813 00:05:10.592928  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:10.592936  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:10.592942  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:10.596527  826514 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 00:05:10.596545  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:10.596550  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:10.596553  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:10.596556  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:10.596559  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:10 GMT
	I0813 00:05:10.596567  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:10.596758  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-sstrb","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"16f6c77d-26a2-47e7-9c19-74736961cc13","resourceVersion":"480","creationTimestamp":"2021-08-13T00:05:06Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"65fbbea7-e9b5-486b-a329-69ff09468fc0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65fbbea7-e9b5-486b-a329-69ff09468fc0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5734 chars]
	I0813 00:05:10.597112  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289
	I0813 00:05:10.597128  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:10.597135  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:10.597141  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:10.598973  826514 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:05:10.598982  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:10.598986  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:10.598989  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:10.598992  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:10.598995  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:10.598997  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:10 GMT
	I0813 00:05:10.599352  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","resourceVersion":"378","creationTimestamp":"2021-08-13T00:04:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813000359-820289","minikube.k8s.io/updated_at":"2021_08_13T00_04_54_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6557 chars]
	I0813 00:05:10.599734  826514 pod_ready.go:92] pod "coredns-558bd4d5db-sstrb" in "kube-system" namespace has status "Ready":"True"
	I0813 00:05:10.599762  826514 pod_ready.go:81] duration metric: took 1.520861701s waiting for pod "coredns-558bd4d5db-sstrb" in "kube-system" namespace to be "Ready" ...
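The 1.52s wait above is a plain poll: the pod GET repeats on a roughly 500ms cadence (00:05:09.59, 00:05:10.09, 00:05:10.59) until the pod's Ready condition flips to True, with a paired node GET to confirm the host is still healthy. A sketch of the condition test behind the pod_ready.go:92 lines, assuming the pod object comes from client-go:

    import corev1 "k8s.io/api/core/v1"

    // podIsReady reports whether the pod's Ready condition is True, which is
    // what `has status "Ready":"True"` means in the lines above.
    func podIsReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }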
	I0813 00:05:10.599776  826514 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-20210813000359-820289" in "kube-system" namespace to be "Ready" ...
	I0813 00:05:10.599844  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210813000359-820289
	I0813 00:05:10.599856  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:10.599863  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:10.599868  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:10.602087  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:10.602101  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:10.602106  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:10.602111  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:10.602115  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:10.602119  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:10.602123  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:10 GMT
	I0813 00:05:10.602319  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210813000359-820289","namespace":"kube-system","uid":"2d8ff24a-3267-4d8b-a528-3da3d3b70e54","resourceVersion":"330","creationTimestamp":"2021-08-13T00:04:59Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.22:2379","kubernetes.io/config.hash":"bc4647bddd439c8d0983a3b358a72513","kubernetes.io/config.mirror":"bc4647bddd439c8d0983a3b358a72513","kubernetes.io/config.seen":"2021-08-13T00:04:59.185501301Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.
kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.ha [truncated 5574 chars]
	I0813 00:05:10.602638  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289
	I0813 00:05:10.602651  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:10.602657  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:10.602663  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:10.605574  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:10.605591  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:10.605597  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:10.605602  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:10.605606  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:10.605610  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:10.605614  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:10 GMT
	I0813 00:05:10.605774  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","resourceVersion":"378","creationTimestamp":"2021-08-13T00:04:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813000359-820289","minikube.k8s.io/updated_at":"2021_08_13T00_04_54_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6557 chars]
	I0813 00:05:10.605980  826514 pod_ready.go:92] pod "etcd-multinode-20210813000359-820289" in "kube-system" namespace has status "Ready":"True"
	I0813 00:05:10.605991  826514 pod_ready.go:81] duration metric: took 6.206986ms waiting for pod "etcd-multinode-20210813000359-820289" in "kube-system" namespace to be "Ready" ...
	I0813 00:05:10.606002  826514 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-20210813000359-820289" in "kube-system" namespace to be "Ready" ...
	I0813 00:05:10.606042  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20210813000359-820289
	I0813 00:05:10.606050  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:10.606055  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:10.606059  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:10.609521  826514 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 00:05:10.609534  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:10.609539  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:10.609546  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:10 GMT
	I0813 00:05:10.609551  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:10.609556  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:10.609561  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:10.610508  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20210813000359-820289","namespace":"kube-system","uid":"b5954b4a-9e51-488b-a0fa-cacb7de86621","resourceVersion":"450","creationTimestamp":"2021-08-13T00:04:59Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.22:8443","kubernetes.io/config.hash":"e0dcc263218298eb0bc9dd91ad6c2c6d","kubernetes.io/config.mirror":"e0dcc263218298eb0bc9dd91ad6c2c6d","kubernetes.io/config.seen":"2021-08-13T00:04:59.185603315Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-addre [truncated 7252 chars]
	I0813 00:05:10.610772  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289
	I0813 00:05:10.610787  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:10.610793  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:10.610798  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:10.613230  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:10.613242  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:10.613246  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:10.613250  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:10.613253  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:10 GMT
	I0813 00:05:10.613255  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:10.613258  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:10.613492  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","resourceVersion":"378","creationTimestamp":"2021-08-13T00:04:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813000359-820289","minikube.k8s.io/updated_at":"2021_08_13T00_04_54_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6557 chars]
	I0813 00:05:10.613687  826514 pod_ready.go:92] pod "kube-apiserver-multinode-20210813000359-820289" in "kube-system" namespace has status "Ready":"True"
	I0813 00:05:10.613699  826514 pod_ready.go:81] duration metric: took 7.690934ms waiting for pod "kube-apiserver-multinode-20210813000359-820289" in "kube-system" namespace to be "Ready" ...
	I0813 00:05:10.613707  826514 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-20210813000359-820289" in "kube-system" namespace to be "Ready" ...
	I0813 00:05:10.613750  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210813000359-820289
	I0813 00:05:10.613758  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:10.613762  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:10.613765  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:10.622909  826514 round_trippers.go:457] Response Status: 200 OK in 9 milliseconds
	I0813 00:05:10.622925  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:10.622931  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:10.622936  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:10.622941  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:10.622946  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:10.622951  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:10 GMT
	I0813 00:05:10.623955  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210813000359-820289","namespace":"kube-system","uid":"f25a529b-df04-44a7-aa11-5f04f8acaaf9","resourceVersion":"452","creationTimestamp":"2021-08-13T00:04:53Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8e7d0bc335d72432dc6bd22d4541dfbd","kubernetes.io/config.mirror":"8e7d0bc335d72432dc6bd22d4541dfbd","kubernetes.io/config.seen":"2021-08-13T00:04:42.246742645Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con
fig.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config [truncated 6813 chars]
	I0813 00:05:10.624225  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289
	I0813 00:05:10.624236  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:10.624241  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:10.624245  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:10.639185  826514 round_trippers.go:457] Response Status: 200 OK in 14 milliseconds
	I0813 00:05:10.639202  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:10.639208  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:10.639212  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:10.639217  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:10 GMT
	I0813 00:05:10.639221  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:10.639223  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:10.639899  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","resourceVersion":"378","creationTimestamp":"2021-08-13T00:04:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813000359-820289","minikube.k8s.io/updated_at":"2021_08_13T00_04_54_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6557 chars]
	I0813 00:05:10.640205  826514 pod_ready.go:92] pod "kube-controller-manager-multinode-20210813000359-820289" in "kube-system" namespace has status "Ready":"True"
	I0813 00:05:10.640220  826514 pod_ready.go:81] duration metric: took 26.505997ms waiting for pod "kube-controller-manager-multinode-20210813000359-820289" in "kube-system" namespace to be "Ready" ...
	I0813 00:05:10.640231  826514 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tvtvh" in "kube-system" namespace to be "Ready" ...
	I0813 00:05:10.640279  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tvtvh
	I0813 00:05:10.640289  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:10.640296  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:10.640302  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:10.642423  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:10.642434  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:10.642440  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:10 GMT
	I0813 00:05:10.642445  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:10.642450  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:10.642454  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:10.642459  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:10.642597  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tvtvh","generateName":"kube-proxy-","namespace":"kube-system","uid":"7b54ef6d-43f6-4dc1-b3be-c5fb1b57a108","resourceVersion":"476","creationTimestamp":"2021-08-13T00:05:06Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e61f546c-a1b1-4412-af08-7c2ebe78d772","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e61f546c-a1b1-4412-af08-7c2ebe78d772\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller
":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5760 chars]
	I0813 00:05:10.642950  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289
	I0813 00:05:10.642966  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:10.642973  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:10.642979  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:10.647579  826514 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0813 00:05:10.647591  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:10.647595  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:10.647598  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:10.647601  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:10.647604  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:10 GMT
	I0813 00:05:10.647607  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:10.648091  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","resourceVersion":"378","creationTimestamp":"2021-08-13T00:04:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813000359-820289","minikube.k8s.io/updated_at":"2021_08_13T00_04_54_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6557 chars]
	I0813 00:05:10.648364  826514 pod_ready.go:92] pod "kube-proxy-tvtvh" in "kube-system" namespace has status "Ready":"True"
	I0813 00:05:10.648382  826514 pod_ready.go:81] duration metric: took 8.142999ms waiting for pod "kube-proxy-tvtvh" in "kube-system" namespace to be "Ready" ...
	I0813 00:05:10.648392  826514 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-20210813000359-820289" in "kube-system" namespace to be "Ready" ...
	I0813 00:05:10.793758  826514 request.go:600] Waited for 145.306191ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210813000359-820289
	I0813 00:05:10.793828  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210813000359-820289
	I0813 00:05:10.793837  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:10.793842  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:10.793852  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:10.796203  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:10.796224  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:10.796231  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:10.796235  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:10 GMT
	I0813 00:05:10.796240  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:10.796244  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:10.796248  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:10.796599  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20210813000359-820289","namespace":"kube-system","uid":"f92e79ae-a806-4356-8c4f-e58f5355dac5","resourceVersion":"328","creationTimestamp":"2021-08-13T00:04:59Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"113dba97bad3e83d4c789adae2059392","kubernetes.io/config.mirror":"113dba97bad3e83d4c789adae2059392","kubernetes.io/config.seen":"2021-08-13T00:04:59.185608489Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:
kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:la [truncated 4543 chars]
	I0813 00:05:10.993333  826514 request.go:600] Waited for 196.357622ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289
	I0813 00:05:10.993408  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289
	I0813 00:05:10.993417  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:10.993423  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:10.993427  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:10.996459  826514 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 00:05:10.996490  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:10.996497  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:10.996502  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:10.996510  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:10.996514  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:10 GMT
	I0813 00:05:10.996519  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:10.996913  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","resourceVersion":"378","creationTimestamp":"2021-08-13T00:04:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813000359-820289","minikube.k8s.io/updated_at":"2021_08_13T00_04_54_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6557 chars]
	I0813 00:05:10.997167  826514 pod_ready.go:92] pod "kube-scheduler-multinode-20210813000359-820289" in "kube-system" namespace has status "Ready":"True"
	I0813 00:05:10.997177  826514 pod_ready.go:81] duration metric: took 348.777284ms waiting for pod "kube-scheduler-multinode-20210813000359-820289" in "kube-system" namespace to be "Ready" ...
	I0813 00:05:10.997185  826514 pod_ready.go:38] duration metric: took 3.450511964s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
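The request.go:600 "Waited for ... due to client-side throttling" lines scattered through the waits above come from client-go's own rate limiter, not the server's priority-and-fairness machinery; the burst of per-pod GET pairs exhausts the client's default budget of 5 QPS with a burst of 10. If that throttling mattered, the client could be built with a larger budget; the 50/100 below is purely an illustrative choice, not what minikube configures:

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // fastClient builds a clientset with a raised client-side rate limit.
    func fastClient(kubeconfig string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50    // default: 5 requests/second
        cfg.Burst = 100 // default: 10
        return kubernetes.NewForConfig(cfg)
    }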
	I0813 00:05:10.997211  826514 api_server.go:50] waiting for apiserver process to appear ...
	I0813 00:05:10.997260  826514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 00:05:11.007956  826514 command_runner.go:124] > 2627
	I0813 00:05:11.008393  826514 api_server.go:70] duration metric: took 3.806217955s to wait for apiserver process to appear ...
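The process check above leans on pgrep semantics: -f matches against the full command line, -x requires that match to be exact, and -n keeps only the newest match, so the single PID printed ("2627") is the running apiserver. A local sketch of the same check (in the real flow the command runs on the node over SSH):

    import (
        "os/exec"
        "strings"
    )

    // apiserverPID returns the newest process whose full command line matches
    // the pattern; pgrep exits non-zero when nothing matches, so err != nil
    // doubles as "no apiserver yet".
    func apiserverPID() (string, error) {
        out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil // e.g. "2627"
    }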
	I0813 00:05:11.008410  826514 api_server.go:86] waiting for apiserver healthz status ...
	I0813 00:05:11.008423  826514 api_server.go:239] Checking apiserver healthz at https://192.168.39.22:8443/healthz ...
	I0813 00:05:11.014574  826514 api_server.go:265] https://192.168.39.22:8443/healthz returned 200:
	ok
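The healthz probe is a plain HTTPS GET: a 200 with the literal body "ok" counts as healthy. A minimal sketch of that probe; InsecureSkipVerify keeps the sketch short, whereas a real client would trust the cluster CA instead:

    import (
        "crypto/tls"
        "io"
        "net/http"
    )

    // apiserverHealthy mirrors the check at api_server.go:239/265: healthy
    // means HTTP 200 and a body of exactly "ok".
    func apiserverHealthy(base string) (bool, error) {
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
        }}
        resp, err := client.Get(base + "/healthz")
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        if err != nil {
            return false, err
        }
        return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
    }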
	I0813 00:05:11.014650  826514 round_trippers.go:432] GET https://192.168.39.22:8443/version?timeout=32s
	I0813 00:05:11.014661  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:11.014668  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:11.014675  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:11.015934  826514 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:05:11.015948  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:11.015952  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:11.015955  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:11.015958  826514 round_trippers.go:463]     Content-Length: 263
	I0813 00:05:11.015961  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:11 GMT
	I0813 00:05:11.015964  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:11.015967  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:11.015984  826514 request.go:1123] Response Body: {
	  "major": "1",
	  "minor": "21",
	  "gitVersion": "v1.21.3",
	  "gitCommit": "ca643a4d1f7bfe34773c74f79527be4afd95bf39",
	  "gitTreeState": "clean",
	  "buildDate": "2021-07-15T20:59:07Z",
	  "goVersion": "go1.16.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0813 00:05:11.016067  826514 api_server.go:139] control plane version: v1.21.3
	I0813 00:05:11.016081  826514 api_server.go:129] duration metric: took 7.664909ms to wait for apiserver health ...
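The /version payload above maps field-for-field onto apimachinery's version.Info type; the control-plane version reported at api_server.go:139 is its GitVersion. A sketch of the decode, assuming the raw body from the response above:

    import (
        "encoding/json"

        "k8s.io/apimachinery/pkg/version"
    )

    // controlPlaneVersion extracts "v1.21.3" from the /version JSON body.
    func controlPlaneVersion(body []byte) (string, error) {
        var info version.Info
        if err := json.Unmarshal(body, &info); err != nil {
            return "", err
        }
        return info.GitVersion, nil
    }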
	I0813 00:05:11.016089  826514 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 00:05:11.193415  826514 request.go:600] Waited for 177.232272ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods
	I0813 00:05:11.193474  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods
	I0813 00:05:11.193479  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:11.193484  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:11.193489  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:11.198553  826514 round_trippers.go:457] Response Status: 200 OK in 5 milliseconds
	I0813 00:05:11.198573  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:11.198578  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:11.198583  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:11.198588  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:11 GMT
	I0813 00:05:11.198591  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:11.198595  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:11.199096  826514 request.go:1123] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"493"},"items":[{"metadata":{"name":"coredns-558bd4d5db-sstrb","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"16f6c77d-26a2-47e7-9c19-74736961cc13","resourceVersion":"480","creationTimestamp":"2021-08-13T00:05:06Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"65fbbea7-e9b5-486b-a329-69ff09468fc0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65fbbea7-e9b5-486b-a329-69ff09468fc0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller"
:{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k: [truncated 52891 chars]
	I0813 00:05:11.200295  826514 system_pods.go:59] 8 kube-system pods found
	I0813 00:05:11.200361  826514 system_pods.go:61] "coredns-558bd4d5db-sstrb" [16f6c77d-26a2-47e7-9c19-74736961cc13] Running
	I0813 00:05:11.200373  826514 system_pods.go:61] "etcd-multinode-20210813000359-820289" [2d8ff24a-3267-4d8b-a528-3da3d3b70e54] Running
	I0813 00:05:11.200383  826514 system_pods.go:61] "kindnet-rzxjz" [650bf88e-f784-45f9-8943-257e984acedb] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0813 00:05:11.200388  826514 system_pods.go:61] "kube-apiserver-multinode-20210813000359-820289" [b5954b4a-9e51-488b-a0fa-cacb7de86621] Running
	I0813 00:05:11.200395  826514 system_pods.go:61] "kube-controller-manager-multinode-20210813000359-820289" [f25a529b-df04-44a7-aa11-5f04f8acaaf9] Running
	I0813 00:05:11.200399  826514 system_pods.go:61] "kube-proxy-tvtvh" [7b54ef6d-43f6-4dc1-b3be-c5fb1b57a108] Running
	I0813 00:05:11.200403  826514 system_pods.go:61] "kube-scheduler-multinode-20210813000359-820289" [f92e79ae-a806-4356-8c4f-e58f5355dac5] Running
	I0813 00:05:11.200408  826514 system_pods.go:61] "storage-provisioner" [9999a063-d32c-4253-8af3-7c28fdc3c692] Running
	I0813 00:05:11.200413  826514 system_pods.go:74] duration metric: took 184.320486ms to wait for pod list to return data ...
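The eight-line inventory above is a single PodList GET over kube-system, printed one pod per line with its phase and, for pods not yet Running, the unready conditions (kindnet-rzxjz's kindnet-cni container is not ready yet). A sketch of the listing, assuming a clientset built elsewhere:

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // listSystemPods prints each kube-system pod with its UID and phase,
    // roughly the shape of the system_pods.go lines above.
    func listSystemPods(ctx context.Context, cs kubernetes.Interface) error {
        pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
        }
        return nil
    }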
	I0813 00:05:11.200422  826514 default_sa.go:34] waiting for default service account to be created ...
	I0813 00:05:11.393852  826514 request.go:600] Waited for 193.35207ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/default/serviceaccounts
	I0813 00:05:11.393908  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/default/serviceaccounts
	I0813 00:05:11.393917  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:11.393923  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:11.393927  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:11.397481  826514 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 00:05:11.397500  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:11.397506  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:11.397512  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:11.397516  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:11.397521  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:11.397524  826514 round_trippers.go:463]     Content-Length: 304
	I0813 00:05:11.397527  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:11 GMT
	I0813 00:05:11.397548  826514 request.go:1123] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"493"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"9a2cb0aa-37cd-4fff-a946-5c4f2781df23","resourceVersion":"392","creationTimestamp":"2021-08-13T00:05:06Z"},"secrets":[{"name":"default-token-hxs74"}]}]}
	I0813 00:05:11.398146  826514 default_sa.go:45] found service account: "default"
	I0813 00:05:11.398162  826514 default_sa.go:55] duration metric: took 197.736144ms for default service account to be created ...
	I0813 00:05:11.398170  826514 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 00:05:11.593599  826514 request.go:600] Waited for 195.355058ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods
	I0813 00:05:11.593670  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods
	I0813 00:05:11.593679  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:11.593689  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:11.593695  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:11.598564  826514 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0813 00:05:11.598592  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:11.598599  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:11.598603  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:11.598608  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:11 GMT
	I0813 00:05:11.598612  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:11.598621  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:11.599626  826514 request.go:1123] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"493"},"items":[{"metadata":{"name":"coredns-558bd4d5db-sstrb","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"16f6c77d-26a2-47e7-9c19-74736961cc13","resourceVersion":"480","creationTimestamp":"2021-08-13T00:05:06Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"65fbbea7-e9b5-486b-a329-69ff09468fc0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65fbbea7-e9b5-486b-a329-69ff09468fc0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller"
:{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k: [truncated 52891 chars]
	I0813 00:05:11.600873  826514 system_pods.go:86] 8 kube-system pods found
	I0813 00:05:11.600896  826514 system_pods.go:89] "coredns-558bd4d5db-sstrb" [16f6c77d-26a2-47e7-9c19-74736961cc13] Running
	I0813 00:05:11.600903  826514 system_pods.go:89] "etcd-multinode-20210813000359-820289" [2d8ff24a-3267-4d8b-a528-3da3d3b70e54] Running
	I0813 00:05:11.600909  826514 system_pods.go:89] "kindnet-rzxjz" [650bf88e-f784-45f9-8943-257e984acedb] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0813 00:05:11.600916  826514 system_pods.go:89] "kube-apiserver-multinode-20210813000359-820289" [b5954b4a-9e51-488b-a0fa-cacb7de86621] Running
	I0813 00:05:11.600923  826514 system_pods.go:89] "kube-controller-manager-multinode-20210813000359-820289" [f25a529b-df04-44a7-aa11-5f04f8acaaf9] Running
	I0813 00:05:11.600927  826514 system_pods.go:89] "kube-proxy-tvtvh" [7b54ef6d-43f6-4dc1-b3be-c5fb1b57a108] Running
	I0813 00:05:11.600931  826514 system_pods.go:89] "kube-scheduler-multinode-20210813000359-820289" [f92e79ae-a806-4356-8c4f-e58f5355dac5] Running
	I0813 00:05:11.600937  826514 system_pods.go:89] "storage-provisioner" [9999a063-d32c-4253-8af3-7c28fdc3c692] Running
	I0813 00:05:11.600943  826514 system_pods.go:126] duration metric: took 202.769344ms to wait for k8s-apps to be running ...
	I0813 00:05:11.600950  826514 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 00:05:11.600998  826514 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 00:05:11.617083  826514 system_svc.go:56] duration metric: took 16.125263ms WaitForService to wait for kubelet.
	I0813 00:05:11.617103  826514 kubeadm.go:547] duration metric: took 4.414932619s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
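The "Waited for ... due to client-side throttling" lines above come from client-go's token-bucket request limiter, not from API-server priority and fairness (the message says as much). At client-go's default of 5 QPS with a burst of 10, requests are spaced about 200ms apart once the burst is spent, which matches the ~195ms and ~176ms waits logged here. A minimal sketch of the same behavior using golang.org/x/time/rate; the limiter values mirror the client-go defaults, and nothing else here is taken from minikube's code:

package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// Token bucket: 5 requests/second, burst of 10 -- the client-go defaults.
	limiter := rate.NewLimiter(rate.Limit(5), 10)
	ctx := context.Background()

	for i := 0; i < 15; i++ {
		start := time.Now()
		if err := limiter.Wait(ctx); err != nil {
			panic(err)
		}
		// Once the burst is spent, each call waits ~200ms (1s / 5 QPS),
		// matching the ~195ms and ~176ms waits in the log above.
		fmt.Printf("request %d waited %v\n", i, time.Since(start).Round(time.Millisecond))
	}
}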
	I0813 00:05:11.617134  826514 node_conditions.go:102] verifying NodePressure condition ...
	I0813 00:05:11.793539  826514 request.go:600] Waited for 176.313476ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes
	I0813 00:05:11.793597  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes
	I0813 00:05:11.793605  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:11.793613  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:11.793623  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:11.796474  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:11.796490  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:11.796496  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:11.796501  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:11 GMT
	I0813 00:05:11.796505  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:11.796510  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:11.796514  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:11.796896  826514 request.go:1123] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"493"},"items":[{"metadata":{"name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","resourceVersion":"378","creationTimestamp":"2021-08-13T00:04:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813000359-820289","minikube.k8s.io/updated_at":"2021_08_13T00_04_54_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","opera [truncated 6610 chars]
	I0813 00:05:11.797993  826514 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 00:05:11.798022  826514 node_conditions.go:123] node cpu capacity is 2
	I0813 00:05:11.798040  826514 node_conditions.go:105] duration metric: took 180.900531ms to run NodePressure ...
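The NodePressure check reads ephemeral-storage and CPU capacity straight out of the NodeList response above. A sketch of fetching the same two fields with client-go, assuming a reachable kubeconfig at the default location (error handling kept minimal):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a kubeconfig at the standard ~/.kube/config location.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// The same two conditions the log verifies: storage and CPU capacity.
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}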
	I0813 00:05:11.798052  826514 start.go:231] waiting for startup goroutines ...
	I0813 00:05:11.800681  826514 out.go:177] 
	I0813 00:05:11.800927  826514 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/config.json ...
	I0813 00:05:11.802911  826514 out.go:177] * Starting node multinode-20210813000359-820289-m02 in cluster multinode-20210813000359-820289
	I0813 00:05:11.802938  826514 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 00:05:11.802955  826514 cache.go:56] Caching tarball of preloaded images
	I0813 00:05:11.803103  826514 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0813 00:05:11.803123  826514 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on crio
	I0813 00:05:11.803196  826514 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/config.json ...
	I0813 00:05:11.803337  826514 cache.go:205] Successfully downloaded all kic artifacts
	I0813 00:05:11.803364  826514 start.go:313] acquiring machines lock for multinode-20210813000359-820289-m02: {Name:mk2d46e46728943fc604570595bb7616469b4e8e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 00:05:11.803420  826514 start.go:317] acquired machines lock for "multinode-20210813000359-820289-m02" in 42.804µs
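The machines lock acquired above (note the Delay:500ms Timeout:13m0s spec in the log) serializes host creation so concurrent minikube runs don't race on the same machine directory. A minimal file-based analogue of such a poll-until-timeout lock; this assumes nothing about minikube's actual lock package beyond the delay/timeout shape printed here:

package main

import (
	"errors"
	"fmt"
	"os"
	"path/filepath"
	"time"
)

// acquire polls for an exclusive lock file, retrying every delay until
// timeout, mirroring the Delay:500ms Timeout:13m0s spec in the log.
func acquire(name string, delay, timeout time.Duration) (release func(), err error) {
	path := filepath.Join(os.TempDir(), name+".lock")
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, errors.New("timed out acquiring " + name)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquire("mk-demo", 500*time.Millisecond, 13*time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
	fmt.Println("lock held; safe to provision the machine")
}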
	I0813 00:05:11.803441  826514 start.go:89] Provisioning new machine with config: &{Name:multinode-20210813000359-820289 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12122/minikube-v1.22.0-1628238775-12122.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-20210813000359-820289 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.22 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0} &{Name:m02 IP: Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}
	I0813 00:05:11.803505  826514 start.go:126] createHost starting for "m02" (driver="kvm2")
	I0813 00:05:11.805398  826514 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0813 00:05:11.805479  826514 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 00:05:11.805513  826514 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 00:05:11.816567  826514 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37579
	I0813 00:05:11.817078  826514 main.go:130] libmachine: () Calling .GetVersion
	I0813 00:05:11.817592  826514 main.go:130] libmachine: Using API Version  1
	I0813 00:05:11.817620  826514 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 00:05:11.818035  826514 main.go:130] libmachine: () Calling .GetMachineName
	I0813 00:05:11.818222  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetMachineName
	I0813 00:05:11.818389  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .DriverName
	I0813 00:05:11.818602  826514 start.go:160] libmachine.API.Create for "multinode-20210813000359-820289" (driver="kvm2")
	I0813 00:05:11.818638  826514 client.go:168] LocalClient.Create starting
	I0813 00:05:11.818676  826514 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem
	I0813 00:05:11.818707  826514 main.go:130] libmachine: Decoding PEM data...
	I0813 00:05:11.818740  826514 main.go:130] libmachine: Parsing certificate...
	I0813 00:05:11.818877  826514 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem
	I0813 00:05:11.818903  826514 main.go:130] libmachine: Decoding PEM data...
	I0813 00:05:11.818921  826514 main.go:130] libmachine: Parsing certificate...
	I0813 00:05:11.819024  826514 main.go:130] libmachine: Running pre-create checks...
	I0813 00:05:11.819041  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .PreCreateCheck
	I0813 00:05:11.819211  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetConfigRaw
	I0813 00:05:11.819705  826514 main.go:130] libmachine: Creating machine...
	I0813 00:05:11.819748  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .Create
	I0813 00:05:11.819880  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Creating KVM machine...
	I0813 00:05:11.822625  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | found existing default KVM network
	I0813 00:05:11.822713  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | found existing private KVM network mk-multinode-20210813000359-820289
	I0813 00:05:11.822830  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Setting up store path in /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289-m02 ...
	I0813 00:05:11.822860  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Building disk image from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/iso/minikube-v1.22.0-1628238775-12122.iso
	I0813 00:05:11.822922  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | I0813 00:05:11.822815  826790 common.go:108] Making disk image using store path: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	I0813 00:05:11.822997  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Downloading /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/iso/minikube-v1.22.0-1628238775-12122.iso...
	I0813 00:05:12.028514  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | I0813 00:05:12.028390  826790 common.go:115] Creating ssh key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289-m02/id_rsa...
	I0813 00:05:12.219895  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | I0813 00:05:12.219785  826790 common.go:121] Creating raw disk image: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289-m02/multinode-20210813000359-820289-m02.rawdisk...
	I0813 00:05:12.219936  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | Writing magic tar header
	I0813 00:05:12.219958  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | Writing SSH key tar header
	I0813 00:05:12.219975  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | I0813 00:05:12.219889  826790 common.go:135] Fixing permissions on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289-m02 ...
	I0813 00:05:12.220000  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289-m02
	I0813 00:05:12.220039  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289-m02 (perms=drwx------)
	I0813 00:05:12.220064  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines
	I0813 00:05:12.220084  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines (perms=drwxr-xr-x)
	I0813 00:05:12.220108  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube (perms=drwxr-xr-x)
	I0813 00:05:12.220126  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b (perms=drwxr-xr-x)
	I0813 00:05:12.220147  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	I0813 00:05:12.220163  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxr-xr-x)
	I0813 00:05:12.220176  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0813 00:05:12.220185  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Creating domain...
	I0813 00:05:12.220211  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b
	I0813 00:05:12.220228  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0813 00:05:12.220241  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | Checking permissions on dir: /home/jenkins
	I0813 00:05:12.220254  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | Checking permissions on dir: /home
	I0813 00:05:12.220267  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | Skipping /home - not owner
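The permission pass above walks from the machine directory upward, keeping the key-holding leaf private (drwx------), giving each owned ancestor the traverse bit (drwxr-xr-x), and stopping at the first directory the user doesn't own ("Skipping /home - not owner"). A condensed, Unix-only sketch of that walk; the ownership check via syscall.Stat_t is a platform assumption, not minikube's code:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"syscall"
)

func fixPermissions(leaf string) error {
	// The leaf holds the SSH key material: owner-only access.
	if err := os.Chmod(leaf, 0o700); err != nil {
		return err
	}
	// Ancestors only need the traverse (execute) bit for others.
	for dir := filepath.Dir(leaf); dir != "/" && dir != "."; dir = filepath.Dir(dir) {
		info, err := os.Stat(dir)
		if err != nil {
			return err
		}
		st, ok := info.Sys().(*syscall.Stat_t)
		if !ok || int(st.Uid) != os.Getuid() {
			fmt.Printf("Skipping %s - not owner\n", dir)
			break
		}
		if err := os.Chmod(dir, 0o755); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	if len(os.Args) < 2 {
		fmt.Println("usage: fixperms <machine-dir>")
		return
	}
	if err := fixPermissions(os.Args[1]); err != nil {
		panic(err)
	}
}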
	I0813 00:05:12.245446  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:ea:65:50 in network default
	I0813 00:05:12.245932  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Ensuring networks are active...
	I0813 00:05:12.245952  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:12.247884  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Ensuring network default is active
	I0813 00:05:12.248149  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Ensuring network mk-multinode-20210813000359-820289 is active
	I0813 00:05:12.248485  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Getting domain xml...
	I0813 00:05:12.250243  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Creating domain...
	I0813 00:05:12.639993  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Waiting to get IP...
	I0813 00:05:12.641047  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:12.641529  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | unable to find current IP address of domain multinode-20210813000359-820289-m02 in network mk-multinode-20210813000359-820289
	I0813 00:05:12.641569  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | I0813 00:05:12.641507  826790 retry.go:31] will retry after 263.082536ms: waiting for machine to come up
	I0813 00:05:12.905638  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:12.906193  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | unable to find current IP address of domain multinode-20210813000359-820289-m02 in network mk-multinode-20210813000359-820289
	I0813 00:05:12.906224  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | I0813 00:05:12.906146  826790 retry.go:31] will retry after 381.329545ms: waiting for machine to come up
	I0813 00:05:13.288610  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:13.289162  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | unable to find current IP address of domain multinode-20210813000359-820289-m02 in network mk-multinode-20210813000359-820289
	I0813 00:05:13.289189  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | I0813 00:05:13.289101  826790 retry.go:31] will retry after 422.765636ms: waiting for machine to come up
	I0813 00:05:13.713583  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:13.714019  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | unable to find current IP address of domain multinode-20210813000359-820289-m02 in network mk-multinode-20210813000359-820289
	I0813 00:05:13.714055  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | I0813 00:05:13.713959  826790 retry.go:31] will retry after 473.074753ms: waiting for machine to come up
	I0813 00:05:14.188437  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:14.188964  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | unable to find current IP address of domain multinode-20210813000359-820289-m02 in network mk-multinode-20210813000359-820289
	I0813 00:05:14.188990  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | I0813 00:05:14.188917  826790 retry.go:31] will retry after 587.352751ms: waiting for machine to come up
	I0813 00:05:14.777734  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:14.778138  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | unable to find current IP address of domain multinode-20210813000359-820289-m02 in network mk-multinode-20210813000359-820289
	I0813 00:05:14.778162  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | I0813 00:05:14.778084  826790 retry.go:31] will retry after 834.206799ms: waiting for machine to come up
	I0813 00:05:15.614030  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:15.614475  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | unable to find current IP address of domain multinode-20210813000359-820289-m02 in network mk-multinode-20210813000359-820289
	I0813 00:05:15.614520  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | I0813 00:05:15.614389  826790 retry.go:31] will retry after 746.553905ms: waiting for machine to come up
	I0813 00:05:16.362340  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:16.362838  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | unable to find current IP address of domain multinode-20210813000359-820289-m02 in network mk-multinode-20210813000359-820289
	I0813 00:05:16.362872  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | I0813 00:05:16.362798  826790 retry.go:31] will retry after 987.362415ms: waiting for machine to come up
	I0813 00:05:17.351370  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:17.351824  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | unable to find current IP address of domain multinode-20210813000359-820289-m02 in network mk-multinode-20210813000359-820289
	I0813 00:05:17.351854  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | I0813 00:05:17.351780  826790 retry.go:31] will retry after 1.189835008s: waiting for machine to come up
	I0813 00:05:18.543064  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:18.543475  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | unable to find current IP address of domain multinode-20210813000359-820289-m02 in network mk-multinode-20210813000359-820289
	I0813 00:05:18.543506  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | I0813 00:05:18.543430  826790 retry.go:31] will retry after 1.677229867s: waiting for machine to come up
	I0813 00:05:20.223263  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:20.223767  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | unable to find current IP address of domain multinode-20210813000359-820289-m02 in network mk-multinode-20210813000359-820289
	I0813 00:05:20.223801  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | I0813 00:05:20.223687  826790 retry.go:31] will retry after 2.346016261s: waiting for machine to come up
	I0813 00:05:22.571841  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:22.572497  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | unable to find current IP address of domain multinode-20210813000359-820289-m02 in network mk-multinode-20210813000359-820289
	I0813 00:05:22.572531  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | I0813 00:05:22.572432  826790 retry.go:31] will retry after 3.36678925s: waiting for machine to come up
	I0813 00:05:25.942836  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:25.943324  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Found IP for machine: 192.168.39.152
	I0813 00:05:25.943354  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has current primary IP address 192.168.39.152 and MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:25.943370  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Reserving static IP address...
	I0813 00:05:25.943744  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | unable to find host DHCP lease matching {name: "multinode-20210813000359-820289-m02", mac: "52:54:00:9e:b0:8d", ip: "192.168.39.152"} in network mk-multinode-20210813000359-820289
	I0813 00:05:25.990157  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | Getting to WaitForSSH function...
	I0813 00:05:25.990212  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Reserved static IP address: 192.168.39.152
	I0813 00:05:25.990228  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Waiting for SSH to be available...
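The run of "will retry after ..." lines above shows the IP poll backing off roughly exponentially with jitter, from 263ms up to about 3.4s, until the DHCP lease appears. A generic sketch of that pattern; the 1.5x growth factor and +/-25% jitter are illustrative guesses, not minikube's exact parameters:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff polls fn, sleeping an exponentially growing, jittered
// interval between attempts, like the retry.go lines in the log.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	wait := base
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		// Jitter by +/-25% so concurrent pollers don't thunder in lockstep.
		jittered := wait + time.Duration(rand.Int63n(int64(wait/2))) - wait/4
		fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
		time.Sleep(jittered)
		wait = wait * 3 / 2 // grow ~1.5x per attempt
	}
	return errors.New("machine never came up")
}

func main() {
	tries := 0
	err := retryWithBackoff(12, 250*time.Millisecond, func() error {
		tries++
		if tries < 5 { // pretend the DHCP lease shows up on the 5th poll
			return errors.New("no IP yet")
		}
		return nil
	})
	fmt.Println("done:", err)
}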
	I0813 00:05:25.994861  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:25.995212  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b0:8d", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:05:25 +0000 UTC Type:0 Mac:52:54:00:9e:b0:8d Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9e:b0:8d}
	I0813 00:05:25.995230  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined IP address 192.168.39.152 and MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:25.995410  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | Using SSH client type: external
	I0813 00:05:25.995453  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289-m02/id_rsa (-rw-------)
	I0813 00:05:25.995501  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.152 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 00:05:25.995516  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | About to run SSH command:
	I0813 00:05:25.995530  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | exit 0
	I0813 00:05:26.132228  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | SSH cmd err, output: <nil>: 
	I0813 00:05:26.132684  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) KVM machine creation complete!
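With "SSH client type: external", the probe above shells out to /usr/bin/ssh with host-key checking disabled and runs `exit 0`, so a zero exit status doubles as an sshd liveness check. A sketch assembling that probe with os/exec, using a subset of the options printed in the log (the key path argument here is a placeholder):

package main

import (
	"fmt"
	"os/exec"
)

// sshAlive returns nil once `ssh ... exit 0` succeeds, i.e. sshd is up.
func sshAlive(user, ip, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		fmt.Sprintf("%s@%s", user, ip),
		"exit 0",
	}
	return exec.Command("ssh", args...).Run()
}

func main() {
	err := sshAlive("docker", "192.168.39.152", "/path/to/id_rsa")
	fmt.Println("ssh reachable:", err == nil)
}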
	I0813 00:05:26.132762  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetConfigRaw
	I0813 00:05:26.133367  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .DriverName
	I0813 00:05:26.133578  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .DriverName
	I0813 00:05:26.133781  826514 main.go:130] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0813 00:05:26.133798  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetState
	I0813 00:05:26.136537  826514 main.go:130] libmachine: Detecting operating system of created instance...
	I0813 00:05:26.136552  826514 main.go:130] libmachine: Waiting for SSH to be available...
	I0813 00:05:26.136558  826514 main.go:130] libmachine: Getting to WaitForSSH function...
	I0813 00:05:26.136567  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHHostname
	I0813 00:05:26.140967  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:26.141279  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b0:8d", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:05:25 +0000 UTC Type:0 Mac:52:54:00:9e:b0:8d Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:multinode-20210813000359-820289-m02 Clientid:01:52:54:00:9e:b0:8d}
	I0813 00:05:26.141309  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined IP address 192.168.39.152 and MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:26.141402  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHPort
	I0813 00:05:26.141580  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHKeyPath
	I0813 00:05:26.141726  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHKeyPath
	I0813 00:05:26.141870  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHUsername
	I0813 00:05:26.142018  826514 main.go:130] libmachine: Using SSH client type: native
	I0813 00:05:26.142177  826514 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0813 00:05:26.142190  826514 main.go:130] libmachine: About to run SSH command:
	exit 0
	I0813 00:05:26.270575  826514 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 00:05:26.270605  826514 main.go:130] libmachine: Detecting the provisioner...
	I0813 00:05:26.270616  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHHostname
	I0813 00:05:26.275878  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:26.276274  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b0:8d", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:05:25 +0000 UTC Type:0 Mac:52:54:00:9e:b0:8d Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:multinode-20210813000359-820289-m02 Clientid:01:52:54:00:9e:b0:8d}
	I0813 00:05:26.276301  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined IP address 192.168.39.152 and MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:26.276458  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHPort
	I0813 00:05:26.276709  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHKeyPath
	I0813 00:05:26.276914  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHKeyPath
	I0813 00:05:26.277031  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHUsername
	I0813 00:05:26.277202  826514 main.go:130] libmachine: Using SSH client type: native
	I0813 00:05:26.277334  826514 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0813 00:05:26.277345  826514 main.go:130] libmachine: About to run SSH command:
	cat /etc/os-release
	I0813 00:05:26.404569  826514 main.go:130] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2020.02.12
	ID=buildroot
	VERSION_ID=2020.02.12
	PRETTY_NAME="Buildroot 2020.02.12"
	
	I0813 00:05:26.404628  826514 main.go:130] libmachine: found compatible host: buildroot
	I0813 00:05:26.404638  826514 main.go:130] libmachine: Provisioning with buildroot...
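Provisioner detection is just `cat /etc/os-release` parsed into key/value pairs; ID=buildroot selects the buildroot provisioner. A short sketch of that parse (the quote-stripping and matching logic are simplifications):

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns the os-release format into a map, stripping quotes.
func parseOSRelease(contents string) map[string]string {
	out := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(contents))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		out[k] = strings.Trim(v, `"`)
	}
	return out
}

func main() {
	release := "NAME=Buildroot\nVERSION=2020.02.12\nID=buildroot\nPRETTY_NAME=\"Buildroot 2020.02.12\"\n"
	info := parseOSRelease(release)
	if info["ID"] == "buildroot" {
		fmt.Println("found compatible host:", info["ID"])
	}
}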
	I0813 00:05:26.404661  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetMachineName
	I0813 00:05:26.404881  826514 buildroot.go:166] provisioning hostname "multinode-20210813000359-820289-m02"
	I0813 00:05:26.404909  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetMachineName
	I0813 00:05:26.405072  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHHostname
	I0813 00:05:26.409749  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:26.410065  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b0:8d", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:05:25 +0000 UTC Type:0 Mac:52:54:00:9e:b0:8d Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:multinode-20210813000359-820289-m02 Clientid:01:52:54:00:9e:b0:8d}
	I0813 00:05:26.410089  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined IP address 192.168.39.152 and MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:26.410241  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHPort
	I0813 00:05:26.410392  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHKeyPath
	I0813 00:05:26.410567  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHKeyPath
	I0813 00:05:26.410713  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHUsername
	I0813 00:05:26.410897  826514 main.go:130] libmachine: Using SSH client type: native
	I0813 00:05:26.411067  826514 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0813 00:05:26.411085  826514 main.go:130] libmachine: About to run SSH command:
	sudo hostname multinode-20210813000359-820289-m02 && echo "multinode-20210813000359-820289-m02" | sudo tee /etc/hostname
	I0813 00:05:26.548864  826514 main.go:130] libmachine: SSH cmd err, output: <nil>: multinode-20210813000359-820289-m02
	
	I0813 00:05:26.548897  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHHostname
	I0813 00:05:26.553708  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:26.553993  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b0:8d", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:05:25 +0000 UTC Type:0 Mac:52:54:00:9e:b0:8d Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:multinode-20210813000359-820289-m02 Clientid:01:52:54:00:9e:b0:8d}
	I0813 00:05:26.554027  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined IP address 192.168.39.152 and MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:26.554150  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHPort
	I0813 00:05:26.554329  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHKeyPath
	I0813 00:05:26.554483  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHKeyPath
	I0813 00:05:26.554647  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHUsername
	I0813 00:05:26.554817  826514 main.go:130] libmachine: Using SSH client type: native
	I0813 00:05:26.554988  826514 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0813 00:05:26.555019  826514 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20210813000359-820289-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20210813000359-820289-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20210813000359-820289-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 00:05:26.689619  826514 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 00:05:26.689648  826514 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube}
	I0813 00:05:26.689666  826514 buildroot.go:174] setting up certificates
	I0813 00:05:26.689674  826514 provision.go:83] configureAuth start
	I0813 00:05:26.689685  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetMachineName
	I0813 00:05:26.689952  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetIP
	I0813 00:05:26.695294  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:26.695641  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b0:8d", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:05:25 +0000 UTC Type:0 Mac:52:54:00:9e:b0:8d Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:multinode-20210813000359-820289-m02 Clientid:01:52:54:00:9e:b0:8d}
	I0813 00:05:26.695674  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined IP address 192.168.39.152 and MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:26.695785  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHHostname
	I0813 00:05:26.700088  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:26.700416  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b0:8d", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:05:25 +0000 UTC Type:0 Mac:52:54:00:9e:b0:8d Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:multinode-20210813000359-820289-m02 Clientid:01:52:54:00:9e:b0:8d}
	I0813 00:05:26.700450  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined IP address 192.168.39.152 and MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:26.700522  826514 provision.go:137] copyHostCerts
	I0813 00:05:26.700558  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem
	I0813 00:05:26.700595  826514 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem, removing ...
	I0813 00:05:26.700618  826514 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem
	I0813 00:05:26.700687  826514 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem (1123 bytes)
	I0813 00:05:26.700764  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem
	I0813 00:05:26.700790  826514 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem, removing ...
	I0813 00:05:26.700799  826514 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem
	I0813 00:05:26.700831  826514 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem (1679 bytes)
	I0813 00:05:26.700879  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem
	I0813 00:05:26.700901  826514 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem, removing ...
	I0813 00:05:26.700910  826514 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem
	I0813 00:05:26.700933  826514 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem (1078 bytes)
	I0813 00:05:26.700984  826514 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem org=jenkins.multinode-20210813000359-820289-m02 san=[192.168.39.152 192.168.39.152 localhost 127.0.0.1 minikube multinode-20210813000359-820289-m02]
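configureAuth mints a server certificate whose SAN list mixes IP addresses and DNS names; x509 keeps those in separate fields, so the list has to be split before signing. A self-contained sketch that self-signs instead of signing with the CA key the log references (error handling elided for brevity):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SANs as printed in the provision.go line above.
	sans := []string{"192.168.39.152", "localhost", "127.0.0.1", "minikube",
		"multinode-20210813000359-820289-m02"}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-20210813000359-820289-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// x509 stores IP and DNS SANs separately, so split the flat list.
	for _, san := range sans {
		if ip := net.ParseIP(san); ip != nil {
			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
		} else {
			tmpl.DNSNames = append(tmpl.DNSNames, san)
		}
	}

	key, _ := rsa.GenerateKey(rand.Reader, 2048) // errors ignored in this sketch
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}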
	I0813 00:05:26.860935  826514 provision.go:171] copyRemoteCerts
	I0813 00:05:26.860988  826514 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 00:05:26.861018  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHHostname
	I0813 00:05:26.865741  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:26.866063  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b0:8d", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:05:25 +0000 UTC Type:0 Mac:52:54:00:9e:b0:8d Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:multinode-20210813000359-820289-m02 Clientid:01:52:54:00:9e:b0:8d}
	I0813 00:05:26.866097  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined IP address 192.168.39.152 and MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:26.866218  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHPort
	I0813 00:05:26.866376  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHKeyPath
	I0813 00:05:26.866534  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHUsername
	I0813 00:05:26.866680  826514 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289-m02/id_rsa Username:docker}
	I0813 00:05:26.959094  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0813 00:05:26.959166  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 00:05:26.975755  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0813 00:05:26.975809  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I0813 00:05:26.991937  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0813 00:05:26.991981  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0813 00:05:27.008954  826514 provision.go:86] duration metric: configureAuth took 319.268918ms
	I0813 00:05:27.008982  826514 buildroot.go:189] setting minikube options for container-runtime
	I0813 00:05:27.009222  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHHostname
	I0813 00:05:27.014503  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:27.014798  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b0:8d", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:05:25 +0000 UTC Type:0 Mac:52:54:00:9e:b0:8d Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:multinode-20210813000359-820289-m02 Clientid:01:52:54:00:9e:b0:8d}
	I0813 00:05:27.014832  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined IP address 192.168.39.152 and MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:27.014966  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHPort
	I0813 00:05:27.015162  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHKeyPath
	I0813 00:05:27.015325  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHKeyPath
	I0813 00:05:27.015448  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHUsername
	I0813 00:05:27.015578  826514 main.go:130] libmachine: Using SSH client type: native
	I0813 00:05:27.015767  826514 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0813 00:05:27.015787  826514 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 00:05:27.632016  826514 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
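The `%!s(MISSING)` in the command above is Go's fmt package annotating a `%s` verb that had no matching operand: the `%s` was meant for the remote shell's printf but evidently passed through a Go format call unescaped. A two-line reproduction (go vet flags the first call for exactly this reason):

package main

import "fmt"

func main() {
	// A %s with no operand is echoed back annotated, exactly as in the log.
	fmt.Println(fmt.Sprintf("sudo mkdir -p /etc/sysconfig && printf %s"))
	// Escaping it as %%s keeps a literal %s for the remote shell's printf.
	fmt.Println(fmt.Sprintf("sudo mkdir -p /etc/sysconfig && printf %%s"))
}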
	I0813 00:05:27.632050  826514 main.go:130] libmachine: Checking connection to Docker...
	I0813 00:05:27.632063  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetURL
	I0813 00:05:27.634807  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | Using libvirt version 3000000
	I0813 00:05:27.639902  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:27.640269  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b0:8d", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:05:25 +0000 UTC Type:0 Mac:52:54:00:9e:b0:8d Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:multinode-20210813000359-820289-m02 Clientid:01:52:54:00:9e:b0:8d}
	I0813 00:05:27.640296  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined IP address 192.168.39.152 and MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:27.640439  826514 main.go:130] libmachine: Docker is up and running!
	I0813 00:05:27.640454  826514 main.go:130] libmachine: Reticulating splines...
	I0813 00:05:27.640462  826514 client.go:171] LocalClient.Create took 15.821812553s
	I0813 00:05:27.640485  826514 start.go:168] duration metric: libmachine.API.Create for "multinode-20210813000359-820289" took 15.821884504s
	I0813 00:05:27.640498  826514 start.go:267] post-start starting for "multinode-20210813000359-820289-m02" (driver="kvm2")
	I0813 00:05:27.640507  826514 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 00:05:27.640534  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .DriverName
	I0813 00:05:27.640772  826514 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 00:05:27.640800  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHHostname
	I0813 00:05:27.644885  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:27.645180  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b0:8d", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:05:25 +0000 UTC Type:0 Mac:52:54:00:9e:b0:8d Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:multinode-20210813000359-820289-m02 Clientid:01:52:54:00:9e:b0:8d}
	I0813 00:05:27.645207  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined IP address 192.168.39.152 and MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:27.645315  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHPort
	I0813 00:05:27.645490  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHKeyPath
	I0813 00:05:27.645668  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHUsername
	I0813 00:05:27.645794  826514 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289-m02/id_rsa Username:docker}
	I0813 00:05:27.738784  826514 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 00:05:27.743460  826514 command_runner.go:124] > NAME=Buildroot
	I0813 00:05:27.743485  826514 command_runner.go:124] > VERSION=2020.02.12
	I0813 00:05:27.743491  826514 command_runner.go:124] > ID=buildroot
	I0813 00:05:27.743502  826514 command_runner.go:124] > VERSION_ID=2020.02.12
	I0813 00:05:27.743509  826514 command_runner.go:124] > PRETTY_NAME="Buildroot 2020.02.12"
	I0813 00:05:27.743548  826514 info.go:137] Remote host: Buildroot 2020.02.12
	I0813 00:05:27.743563  826514 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/addons for local assets ...
	I0813 00:05:27.743630  826514 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files for local assets ...
	I0813 00:05:27.743770  826514 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/8202892.pem -> 8202892.pem in /etc/ssl/certs
	I0813 00:05:27.743783  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/8202892.pem -> /etc/ssl/certs/8202892.pem
	I0813 00:05:27.743910  826514 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 00:05:27.750704  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/8202892.pem --> /etc/ssl/certs/8202892.pem (1708 bytes)
	I0813 00:05:27.767250  826514 start.go:270] post-start completed in 126.737681ms
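	The scp above is minikube's file sync at work: anything staged under the profile's .minikube/files directory is mirrored into the guest at the same relative path during post-start. As a sketch (extra-ca.pem is a hypothetical filename; $MINIKUBE_HOME stands for the long integration path above), a file staged at

	  $MINIKUBE_HOME/.minikube/files/etc/ssl/certs/extra-ca.pem

	would land in the node at /etc/ssl/certs/extra-ca.pem on the next start.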
	I0813 00:05:27.767299  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetConfigRaw
	I0813 00:05:27.767949  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetIP
	I0813 00:05:27.772859  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:27.773128  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b0:8d", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:05:25 +0000 UTC Type:0 Mac:52:54:00:9e:b0:8d Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:multinode-20210813000359-820289-m02 Clientid:01:52:54:00:9e:b0:8d}
	I0813 00:05:27.773161  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined IP address 192.168.39.152 and MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:27.773366  826514 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/config.json ...
	I0813 00:05:27.773544  826514 start.go:129] duration metric: createHost completed in 15.970030387s
	I0813 00:05:27.773566  826514 start.go:80] releasing machines lock for "multinode-20210813000359-820289-m02", held for 15.970128109s
	I0813 00:05:27.773612  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .DriverName
	I0813 00:05:27.773797  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetIP
	I0813 00:05:27.777787  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:27.778095  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b0:8d", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:05:25 +0000 UTC Type:0 Mac:52:54:00:9e:b0:8d Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:multinode-20210813000359-820289-m02 Clientid:01:52:54:00:9e:b0:8d}
	I0813 00:05:27.778126  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined IP address 192.168.39.152 and MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:27.780843  826514 out.go:177] * Found network options:
	I0813 00:05:27.782254  826514 out.go:177]   - NO_PROXY=192.168.39.22
	W0813 00:05:27.782294  826514 proxy.go:118] fail to check proxy env: Error ip not in block
	I0813 00:05:27.782341  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .DriverName
	I0813 00:05:27.782517  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .DriverName
	I0813 00:05:27.782981  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .DriverName
	W0813 00:05:27.783164  826514 proxy.go:118] fail to check proxy env: Error ip not in block
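	The two "fail to check proxy env" warnings appear to come from NO_PROXY being the bare control-plane IP 192.168.39.22 rather than a CIDR block, so the ip-in-block parse has nothing to match; the run proceeds regardless. An illustrative CIDR form (not what this job used) covering the whole node subnet would be:

	  export NO_PROXY=192.168.39.0/24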
	I0813 00:05:27.783208  826514 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 00:05:27.783282  826514 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 00:05:27.783330  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHHostname
	I0813 00:05:27.783282  826514 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 00:05:27.783388  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHHostname
	I0813 00:05:27.790002  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:27.790035  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:27.790337  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b0:8d", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:05:25 +0000 UTC Type:0 Mac:52:54:00:9e:b0:8d Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:multinode-20210813000359-820289-m02 Clientid:01:52:54:00:9e:b0:8d}
	I0813 00:05:27.790373  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined IP address 192.168.39.152 and MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:27.790440  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b0:8d", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:05:25 +0000 UTC Type:0 Mac:52:54:00:9e:b0:8d Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:multinode-20210813000359-820289-m02 Clientid:01:52:54:00:9e:b0:8d}
	I0813 00:05:27.790469  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined IP address 192.168.39.152 and MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:27.790512  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHPort
	I0813 00:05:27.790640  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHPort
	I0813 00:05:27.790725  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHKeyPath
	I0813 00:05:27.790790  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHKeyPath
	I0813 00:05:27.790853  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHUsername
	I0813 00:05:27.790905  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHUsername
	I0813 00:05:27.790955  826514 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289-m02/id_rsa Username:docker}
	I0813 00:05:27.790991  826514 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289-m02/id_rsa Username:docker}
	I0813 00:05:31.875217  826514 command_runner.go:124] > {
	I0813 00:05:31.875247  826514 command_runner.go:124] >   "images": [
	I0813 00:05:31.875254  826514 command_runner.go:124] >   ]
	I0813 00:05:31.875259  826514 command_runner.go:124] > }
	I0813 00:05:31.876323  826514 command_runner.go:124] > <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
	I0813 00:05:31.876343  826514 command_runner.go:124] > <TITLE>302 Moved</TITLE></HEAD><BODY>
	I0813 00:05:31.876349  826514 command_runner.go:124] > <H1>302 Moved</H1>
	I0813 00:05:31.876354  826514 command_runner.go:124] > The document has moved
	I0813 00:05:31.876364  826514 command_runner.go:124] > <A HREF="https://cloud.google.com/container-registry/">here</A>.
	I0813 00:05:31.876369  826514 command_runner.go:124] > </BODY></HTML>
	I0813 00:05:31.876405  826514 ssh_runner.go:189] Completed: curl -sS -m 2 https://k8s.gcr.io/: (4.093100139s)
	I0813 00:05:31.876506  826514 command_runner.go:124] ! time="2021-08-13T00:05:27Z" level=warning msg="image connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	I0813 00:05:31.876524  826514 command_runner.go:124] ! time="2021-08-13T00:05:29Z" level=error msg="connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	I0813 00:05:31.876544  826514 command_runner.go:124] ! time="2021-08-13T00:05:31Z" level=error msg="connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	I0813 00:05:31.876579  826514 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.093195447s)
	I0813 00:05:31.876616  826514 crio.go:420] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.21.3". assuming images are not preloaded.
	I0813 00:05:31.876669  826514 ssh_runner.go:149] Run: which lz4
	I0813 00:05:31.880822  826514 command_runner.go:124] > /bin/lz4
	I0813 00:05:31.881074  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0813 00:05:31.881166  826514 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0813 00:05:31.885591  826514 command_runner.go:124] ! stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0813 00:05:31.885629  826514 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0813 00:05:31.885658  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (576184326 bytes)
	I0813 00:05:34.202842  826514 crio.go:362] Took 2.321704 seconds to copy over tarball
	I0813 00:05:34.202919  826514 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0813 00:05:39.218531  826514 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (5.015578535s)
	I0813 00:05:39.218565  826514 crio.go:369] Took 5.015691 seconds to extract the tarball
	I0813 00:05:39.218576  826514 ssh_runner.go:100] rm: /preloaded.tar.lz4
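	The tarball name itself documents what was just restored: preload schema v11, Kubernetes v1.21.3, the cri-o runtime, the overlay storage driver, and the amd64 architecture. The host-side cache the scp drew from can be inspected with:

	  ls -lh /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/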
	I0813 00:05:39.260566  826514 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 00:05:39.273464  826514 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 00:05:39.283375  826514 docker.go:153] disabling docker service ...
	I0813 00:05:39.283428  826514 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 00:05:39.294353  826514 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 00:05:39.303091  826514 command_runner.go:124] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0813 00:05:39.303316  826514 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 00:05:39.312741  826514 command_runner.go:124] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0813 00:05:39.439475  826514 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 00:05:39.575584  826514 command_runner.go:124] ! Unit docker.service does not exist, proceeding anyway.
	I0813 00:05:39.575617  826514 command_runner.go:124] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
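	Masking links docker.service to /dev/null (confirmed by the symlink message above), so neither an operator nor socket activation can start Docker while CRI-O owns the node. One way to verify the state afterwards:

	  systemctl is-enabled docker.service   # prints 'masked'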
	I0813 00:05:39.575726  826514 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 00:05:39.585566  826514 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 00:05:39.598735  826514 command_runner.go:124] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0813 00:05:39.598758  826514 command_runner.go:124] > image-endpoint: unix:///var/run/crio/crio.sock
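	With /etc/crictl.yaml written, crictl talks to the CRI-O socket directly instead of probing the deprecated default endpoints that produced the warnings at 00:05:27-00:05:31 above. A quick sanity check once crio is up:

	  sudo crictl version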
	I0813 00:05:39.598822  826514 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0813 00:05:39.606200  826514 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0813 00:05:39.606221  826514 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
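	After the two sed edits, /etc/crio/crio.conf should carry the values minikube needs (pause_image is visible again in the config dump further down):

	  pause_image = "k8s.gcr.io/pause:3.4.1"
	  cni_default_network = "kindnet"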
	I0813 00:05:39.613697  826514 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 00:05:39.620204  826514 command_runner.go:124] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 00:05:39.620468  826514 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 00:05:39.620513  826514 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 00:05:39.634564  826514 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
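	Loading br_netfilter and enabling IP forwarding is what lets pod traffic on the kindnet bridge traverse iptables and route between nodes. Both settings can be confirmed in one call; each is expected to print 1 after the steps above:

	  sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward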
	I0813 00:05:39.641333  826514 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 00:05:39.785855  826514 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 00:05:40.054895  826514 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 00:05:40.054976  826514 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 00:05:40.059499  826514 command_runner.go:124] >   File: /var/run/crio/crio.sock
	I0813 00:05:40.059522  826514 command_runner.go:124] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0813 00:05:40.059531  826514 command_runner.go:124] > Device: 14h/20d	Inode: 29533       Links: 1
	I0813 00:05:40.059538  826514 command_runner.go:124] > Access: (0755/srwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0813 00:05:40.059543  826514 command_runner.go:124] > Access: 2021-08-13 00:05:31.821340619 +0000
	I0813 00:05:40.059551  826514 command_runner.go:124] > Modify: 2021-08-13 00:05:27.527499015 +0000
	I0813 00:05:40.059557  826514 command_runner.go:124] > Change: 2021-08-13 00:05:27.527499015 +0000
	I0813 00:05:40.059560  826514 command_runner.go:124] >  Birth: -
	I0813 00:05:40.060000  826514 start.go:417] Will wait 60s for crictl version
	I0813 00:05:40.060059  826514 ssh_runner.go:149] Run: sudo crictl version
	I0813 00:05:40.091311  826514 command_runner.go:124] > Version:  0.1.0
	I0813 00:05:40.091333  826514 command_runner.go:124] > RuntimeName:  cri-o
	I0813 00:05:40.091344  826514 command_runner.go:124] > RuntimeVersion:  1.20.2
	I0813 00:05:40.091353  826514 command_runner.go:124] > RuntimeApiVersion:  v1alpha1
	I0813 00:05:40.092257  826514 start.go:426] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.2
	RuntimeApiVersion:  v1alpha1
	I0813 00:05:40.092340  826514 ssh_runner.go:149] Run: crio --version
	I0813 00:05:40.338752  826514 command_runner.go:124] > crio version 1.20.2
	I0813 00:05:40.338779  826514 command_runner.go:124] > Version:       1.20.2
	I0813 00:05:40.338786  826514 command_runner.go:124] > GitCommit:     d5a999ad0a35d895ded554e1e18c142075501a98
	I0813 00:05:40.338791  826514 command_runner.go:124] > GitTreeState:  clean
	I0813 00:05:40.338798  826514 command_runner.go:124] > BuildDate:     2021-08-06T09:19:16Z
	I0813 00:05:40.338803  826514 command_runner.go:124] > GoVersion:     go1.13.15
	I0813 00:05:40.338809  826514 command_runner.go:124] > Compiler:      gc
	I0813 00:05:40.338817  826514 command_runner.go:124] > Platform:      linux/amd64
	I0813 00:05:40.340459  826514 command_runner.go:124] ! time="2021-08-13T00:05:40Z" level=info msg="Starting CRI-O, version: 1.20.2, git: d5a999ad0a35d895ded554e1e18c142075501a98(clean)"
	I0813 00:05:40.340545  826514 ssh_runner.go:149] Run: crio --version
	I0813 00:05:40.634279  826514 command_runner.go:124] > crio version 1.20.2
	I0813 00:05:40.634306  826514 command_runner.go:124] > Version:       1.20.2
	I0813 00:05:40.634316  826514 command_runner.go:124] > GitCommit:     d5a999ad0a35d895ded554e1e18c142075501a98
	I0813 00:05:40.634321  826514 command_runner.go:124] > GitTreeState:  clean
	I0813 00:05:40.634335  826514 command_runner.go:124] > BuildDate:     2021-08-06T09:19:16Z
	I0813 00:05:40.634342  826514 command_runner.go:124] > GoVersion:     go1.13.15
	I0813 00:05:40.634348  826514 command_runner.go:124] > Compiler:      gc
	I0813 00:05:40.634355  826514 command_runner.go:124] > Platform:      linux/amd64
	I0813 00:05:40.635480  826514 command_runner.go:124] ! time="2021-08-13T00:05:40Z" level=info msg="Starting CRI-O, version: 1.20.2, git: d5a999ad0a35d895ded554e1e18c142075501a98(clean)"
	I0813 00:05:43.433000  826514 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.2 ...
	I0813 00:05:43.435220  826514 out.go:177]   - env NO_PROXY=192.168.39.22
	I0813 00:05:43.435265  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetIP
	I0813 00:05:43.440935  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:43.441303  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b0:8d", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:05:25 +0000 UTC Type:0 Mac:52:54:00:9e:b0:8d Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:multinode-20210813000359-820289-m02 Clientid:01:52:54:00:9e:b0:8d}
	I0813 00:05:43.441336  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined IP address 192.168.39.152 and MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:43.441535  826514 ssh_runner.go:149] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0813 00:05:43.446316  826514 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
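	The bash pipeline first drops any stale host.minikube.internal entry, then appends the fresh mapping and copies the temp file back, so the edit is effectively atomic. Afterwards the guest resolves the host-side gateway:

	  $ grep host.minikube.internal /etc/hosts
	  192.168.39.1	host.minikube.internal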
	I0813 00:05:43.457318  826514 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289 for IP: 192.168.39.152
	I0813 00:05:43.457364  826514 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key
	I0813 00:05:43.457381  826514 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key
	I0813 00:05:43.457393  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0813 00:05:43.457408  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0813 00:05:43.457420  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0813 00:05:43.457431  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0813 00:05:43.457487  826514 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/820289.pem (1338 bytes)
	W0813 00:05:43.457527  826514 certs.go:369] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/820289_empty.pem, impossibly tiny 0 bytes
	I0813 00:05:43.457540  826514 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 00:05:43.457566  826514 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem (1078 bytes)
	I0813 00:05:43.457592  826514 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem (1123 bytes)
	I0813 00:05:43.457615  826514 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem (1679 bytes)
	I0813 00:05:43.457664  826514 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/8202892.pem (1708 bytes)
	I0813 00:05:43.457699  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0813 00:05:43.457712  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/820289.pem -> /usr/share/ca-certificates/820289.pem
	I0813 00:05:43.457723  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/8202892.pem -> /usr/share/ca-certificates/8202892.pem
	I0813 00:05:43.458108  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 00:05:43.475819  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0813 00:05:43.492168  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 00:05:43.507680  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 00:05:43.524777  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 00:05:43.541295  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/820289.pem --> /usr/share/ca-certificates/820289.pem (1338 bytes)
	I0813 00:05:43.557434  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/8202892.pem --> /usr/share/ca-certificates/8202892.pem (1708 bytes)
	I0813 00:05:43.573549  826514 ssh_runner.go:149] Run: openssl version
	I0813 00:05:43.578939  826514 command_runner.go:124] > OpenSSL 1.1.1k  25 Mar 2021
	I0813 00:05:43.579003  826514 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8202892.pem && ln -fs /usr/share/ca-certificates/8202892.pem /etc/ssl/certs/8202892.pem"
	I0813 00:05:43.586452  826514 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8202892.pem
	I0813 00:05:43.590618  826514 command_runner.go:124] > -rw-r--r-- 1 root root 1708 Aug 12 23:59 /usr/share/ca-certificates/8202892.pem
	I0813 00:05:43.591039  826514 certs.go:416] hashing: -rw-r--r-- 1 root root 1708 Aug 12 23:59 /usr/share/ca-certificates/8202892.pem
	I0813 00:05:43.591079  826514 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8202892.pem
	I0813 00:05:43.596423  826514 command_runner.go:124] > 3ec20f2e
	I0813 00:05:43.596785  826514 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8202892.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 00:05:43.604354  826514 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 00:05:43.611690  826514 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 00:05:43.616528  826514 command_runner.go:124] > -rw-r--r-- 1 root root 1111 Aug 12 23:51 /usr/share/ca-certificates/minikubeCA.pem
	I0813 00:05:43.616845  826514 certs.go:416] hashing: -rw-r--r-- 1 root root 1111 Aug 12 23:51 /usr/share/ca-certificates/minikubeCA.pem
	I0813 00:05:43.616888  826514 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 00:05:43.622478  826514 command_runner.go:124] > b5213941
	I0813 00:05:43.622519  826514 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 00:05:43.630072  826514 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/820289.pem && ln -fs /usr/share/ca-certificates/820289.pem /etc/ssl/certs/820289.pem"
	I0813 00:05:43.638062  826514 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/820289.pem
	I0813 00:05:43.642625  826514 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Aug 12 23:59 /usr/share/ca-certificates/820289.pem
	I0813 00:05:43.642646  826514 certs.go:416] hashing: -rw-r--r-- 1 root root 1338 Aug 12 23:59 /usr/share/ca-certificates/820289.pem
	I0813 00:05:43.642677  826514 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/820289.pem
	I0813 00:05:43.648215  826514 command_runner.go:124] > 51391683
	I0813 00:05:43.648279  826514 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/820289.pem /etc/ssl/certs/51391683.0"
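	The hash-and-symlink sequence above follows OpenSSL's CA lookup convention: x509 -hash prints the certificate's subject-name hash, and a symlink named <hash>.0 under /etc/ssl/certs lets any OpenSSL-linked client locate the CA. For the minikube CA installed in this run that is:

	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # b5213941
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0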
	I0813 00:05:43.655779  826514 ssh_runner.go:149] Run: crio config
	I0813 00:05:43.873484  826514 command_runner.go:124] ! time="2021-08-13T00:05:43Z" level=info msg="Starting CRI-O, version: 1.20.2, git: d5a999ad0a35d895ded554e1e18c142075501a98(clean)"
	I0813 00:05:43.874762  826514 command_runner.go:124] ! time="2021-08-13T00:05:43Z" level=warning msg="The 'registries' option in crio.conf(5) (referenced in \"/etc/crio/crio.conf\") has been deprecated and will be removed with CRI-O 1.21."
	I0813 00:05:43.874793  826514 command_runner.go:124] ! time="2021-08-13T00:05:43Z" level=warning msg="Please refer to containers-registries.conf(5) for configuring unqualified-search registries."
	I0813 00:05:43.876991  826514 command_runner.go:124] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0813 00:05:43.879382  826514 command_runner.go:124] > # The CRI-O configuration file specifies all of the available configuration
	I0813 00:05:43.879398  826514 command_runner.go:124] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0813 00:05:43.879405  826514 command_runner.go:124] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0813 00:05:43.879408  826514 command_runner.go:124] > #
	I0813 00:05:43.879416  826514 command_runner.go:124] > # Please refer to crio.conf(5) for details of all configuration options.
	I0813 00:05:43.879427  826514 command_runner.go:124] > # CRI-O supports partial configuration reload during runtime, which can be
	I0813 00:05:43.879440  826514 command_runner.go:124] > # done by sending SIGHUP to the running process. Currently supported options
	I0813 00:05:43.879454  826514 command_runner.go:124] > # are explicitly mentioned with: 'This option supports live configuration
	I0813 00:05:43.879464  826514 command_runner.go:124] > # reload'.
	I0813 00:05:43.879475  826514 command_runner.go:124] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0813 00:05:43.879488  826514 command_runner.go:124] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0813 00:05:43.879501  826514 command_runner.go:124] > # you want to change the system's defaults. If you want to modify storage just
	I0813 00:05:43.879513  826514 command_runner.go:124] > # for CRI-O, you can change the storage configuration options here.
	I0813 00:05:43.879521  826514 command_runner.go:124] > [crio]
	I0813 00:05:43.879531  826514 command_runner.go:124] > # Path to the "root directory". CRI-O stores all of its data, including
	I0813 00:05:43.879541  826514 command_runner.go:124] > # container images, in this directory.
	I0813 00:05:43.879550  826514 command_runner.go:124] > #root = "/var/lib/containers/storage"
	I0813 00:05:43.879566  826514 command_runner.go:124] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0813 00:05:43.879576  826514 command_runner.go:124] > #runroot = "/var/run/containers/storage"
	I0813 00:05:43.879590  826514 command_runner.go:124] > # Storage driver used to manage the storage of images and containers. Please
	I0813 00:05:43.879602  826514 command_runner.go:124] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0813 00:05:43.879612  826514 command_runner.go:124] > #storage_driver = "overlay"
	I0813 00:05:43.879621  826514 command_runner.go:124] > # List of options to pass to the storage driver. Please refer to
	I0813 00:05:43.879630  826514 command_runner.go:124] > # containers-storage.conf(5) to see all available storage options.
	I0813 00:05:43.879635  826514 command_runner.go:124] > #storage_option = [
	I0813 00:05:43.879639  826514 command_runner.go:124] > #]
	I0813 00:05:43.879646  826514 command_runner.go:124] > # The default log directory where all logs will go unless directly specified by
	I0813 00:05:43.879654  826514 command_runner.go:124] > # the kubelet. The log directory specified must be an absolute directory.
	I0813 00:05:43.879658  826514 command_runner.go:124] > log_dir = "/var/log/crio/pods"
	I0813 00:05:43.879666  826514 command_runner.go:124] > # Location for CRI-O to lay down the temporary version file.
	I0813 00:05:43.879673  826514 command_runner.go:124] > # It is used to check if crio wipe should wipe containers, which should
	I0813 00:05:43.879679  826514 command_runner.go:124] > # always happen on a node reboot
	I0813 00:05:43.879684  826514 command_runner.go:124] > version_file = "/var/run/crio/version"
	I0813 00:05:43.879694  826514 command_runner.go:124] > # Location for CRI-O to lay down the persistent version file.
	I0813 00:05:43.879700  826514 command_runner.go:124] > # It is used to check if crio wipe should wipe images, which should
	I0813 00:05:43.879721  826514 command_runner.go:124] > # only happen when CRI-O has been upgraded
	I0813 00:05:43.879730  826514 command_runner.go:124] > version_file_persist = "/var/lib/crio/version"
	I0813 00:05:43.879737  826514 command_runner.go:124] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0813 00:05:43.879743  826514 command_runner.go:124] > [crio.api]
	I0813 00:05:43.879748  826514 command_runner.go:124] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0813 00:05:43.879753  826514 command_runner.go:124] > listen = "/var/run/crio/crio.sock"
	I0813 00:05:43.879759  826514 command_runner.go:124] > # IP address on which the stream server will listen.
	I0813 00:05:43.879765  826514 command_runner.go:124] > stream_address = "127.0.0.1"
	I0813 00:05:43.879772  826514 command_runner.go:124] > # The port on which the stream server will listen. If the port is set to "0", then
	I0813 00:05:43.879778  826514 command_runner.go:124] > # CRI-O will allocate a random free port number.
	I0813 00:05:43.879782  826514 command_runner.go:124] > stream_port = "0"
	I0813 00:05:43.879787  826514 command_runner.go:124] > # Enable encrypted TLS transport of the stream server.
	I0813 00:05:43.879793  826514 command_runner.go:124] > stream_enable_tls = false
	I0813 00:05:43.879801  826514 command_runner.go:124] > # Length of time until open streams terminate due to lack of activity
	I0813 00:05:43.879807  826514 command_runner.go:124] > stream_idle_timeout = ""
	I0813 00:05:43.879814  826514 command_runner.go:124] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0813 00:05:43.879822  826514 command_runner.go:124] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0813 00:05:43.879826  826514 command_runner.go:124] > # minutes.
	I0813 00:05:43.879830  826514 command_runner.go:124] > stream_tls_cert = ""
	I0813 00:05:43.879838  826514 command_runner.go:124] > # Path to the key file used to serve the encrypted stream. This file can
	I0813 00:05:43.879844  826514 command_runner.go:124] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0813 00:05:43.879852  826514 command_runner.go:124] > stream_tls_key = ""
	I0813 00:05:43.879860  826514 command_runner.go:124] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0813 00:05:43.879870  826514 command_runner.go:124] > # communication with the encrypted stream. This file can change and CRI-O will
	I0813 00:05:43.879877  826514 command_runner.go:124] > # automatically pick up the changes within 5 minutes.
	I0813 00:05:43.879881  826514 command_runner.go:124] > stream_tls_ca = ""
	I0813 00:05:43.879891  826514 command_runner.go:124] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0813 00:05:43.879897  826514 command_runner.go:124] > grpc_max_send_msg_size = 16777216
	I0813 00:05:43.879905  826514 command_runner.go:124] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0813 00:05:43.879911  826514 command_runner.go:124] > grpc_max_recv_msg_size = 16777216
	I0813 00:05:43.879918  826514 command_runner.go:124] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0813 00:05:43.879927  826514 command_runner.go:124] > # and options for how to set up and manage the OCI runtime.
	I0813 00:05:43.879930  826514 command_runner.go:124] > [crio.runtime]
	I0813 00:05:43.879936  826514 command_runner.go:124] > # A list of ulimits to be set in containers by default, specified as
	I0813 00:05:43.879943  826514 command_runner.go:124] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0813 00:05:43.879947  826514 command_runner.go:124] > # "nofile=1024:2048"
	I0813 00:05:43.879954  826514 command_runner.go:124] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0813 00:05:43.879959  826514 command_runner.go:124] > #default_ulimits = [
	I0813 00:05:43.879963  826514 command_runner.go:124] > #]
	I0813 00:05:43.879969  826514 command_runner.go:124] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0813 00:05:43.879976  826514 command_runner.go:124] > no_pivot = false
	I0813 00:05:43.879982  826514 command_runner.go:124] > # decryption_keys_path is the path where the keys required for
	I0813 00:05:43.880040  826514 command_runner.go:124] > # image decryption are stored. This option supports live configuration reload.
	I0813 00:05:43.880050  826514 command_runner.go:124] > decryption_keys_path = "/etc/crio/keys/"
	I0813 00:05:43.880056  826514 command_runner.go:124] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0813 00:05:43.880061  826514 command_runner.go:124] > # Will be searched for using $PATH if empty.
	I0813 00:05:43.880065  826514 command_runner.go:124] > conmon = "/usr/libexec/crio/conmon"
	I0813 00:05:43.880071  826514 command_runner.go:124] > # Cgroup setting for conmon
	I0813 00:05:43.880075  826514 command_runner.go:124] > conmon_cgroup = "system.slice"
	I0813 00:05:43.880082  826514 command_runner.go:124] > # Environment variable list for the conmon process, used for passing necessary
	I0813 00:05:43.880089  826514 command_runner.go:124] > # environment variables to conmon or the runtime.
	I0813 00:05:43.880093  826514 command_runner.go:124] > conmon_env = [
	I0813 00:05:43.880100  826514 command_runner.go:124] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0813 00:05:43.880103  826514 command_runner.go:124] > ]
	I0813 00:05:43.880109  826514 command_runner.go:124] > # Additional environment variables to set for all the
	I0813 00:05:43.880117  826514 command_runner.go:124] > # containers. These are overridden if set in the
	I0813 00:05:43.880123  826514 command_runner.go:124] > # container image spec or in the container runtime configuration.
	I0813 00:05:43.880129  826514 command_runner.go:124] > default_env = [
	I0813 00:05:43.880132  826514 command_runner.go:124] > ]
	I0813 00:05:43.880139  826514 command_runner.go:124] > # If true, SELinux will be used for pod separation on the host.
	I0813 00:05:43.880145  826514 command_runner.go:124] > selinux = false
	I0813 00:05:43.880151  826514 command_runner.go:124] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0813 00:05:43.880157  826514 command_runner.go:124] > # for the runtime. If not specified, then the internal default seccomp profile
	I0813 00:05:43.880165  826514 command_runner.go:124] > # will be used. This option supports live configuration reload.
	I0813 00:05:43.880169  826514 command_runner.go:124] > seccomp_profile = ""
	I0813 00:05:43.880176  826514 command_runner.go:124] > # Changes the meaning of an empty seccomp profile. By default
	I0813 00:05:43.880182  826514 command_runner.go:124] > # (and according to CRI spec), an empty profile means unconfined.
	I0813 00:05:43.880190  826514 command_runner.go:124] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0813 00:05:43.880195  826514 command_runner.go:124] > # which might increase security.
	I0813 00:05:43.880206  826514 command_runner.go:124] > seccomp_use_default_when_empty = false
	I0813 00:05:43.880216  826514 command_runner.go:124] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0813 00:05:43.880223  826514 command_runner.go:124] > # profile name is "crio-default". This profile only takes effect if the user
	I0813 00:05:43.880231  826514 command_runner.go:124] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0813 00:05:43.880239  826514 command_runner.go:124] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0813 00:05:43.880246  826514 command_runner.go:124] > # This option supports live configuration reload.
	I0813 00:05:43.880251  826514 command_runner.go:124] > apparmor_profile = "crio-default"
	I0813 00:05:43.880258  826514 command_runner.go:124] > # Used to change irqbalance service config file path which is used for configuring
	I0813 00:05:43.880263  826514 command_runner.go:124] > # irqbalance daemon.
	I0813 00:05:43.880268  826514 command_runner.go:124] > irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0813 00:05:43.880276  826514 command_runner.go:124] > # Cgroup management implementation used for the runtime.
	I0813 00:05:43.880280  826514 command_runner.go:124] > cgroup_manager = "systemd"
	I0813 00:05:43.880286  826514 command_runner.go:124] > # Specify whether the image pull must be performed in a separate cgroup.
	I0813 00:05:43.880292  826514 command_runner.go:124] > separate_pull_cgroup = ""
	I0813 00:05:43.880299  826514 command_runner.go:124] > # List of default capabilities for containers. If it is empty or commented out,
	I0813 00:05:43.880308  826514 command_runner.go:124] > # only the capabilities defined in the containers json file by the user/kube
	I0813 00:05:43.880311  826514 command_runner.go:124] > # will be added.
	I0813 00:05:43.880317  826514 command_runner.go:124] > default_capabilities = [
	I0813 00:05:43.880321  826514 command_runner.go:124] > 	"CHOWN",
	I0813 00:05:43.880324  826514 command_runner.go:124] > 	"DAC_OVERRIDE",
	I0813 00:05:43.880329  826514 command_runner.go:124] > 	"FSETID",
	I0813 00:05:43.880332  826514 command_runner.go:124] > 	"FOWNER",
	I0813 00:05:43.880337  826514 command_runner.go:124] > 	"SETGID",
	I0813 00:05:43.880340  826514 command_runner.go:124] > 	"SETUID",
	I0813 00:05:43.880345  826514 command_runner.go:124] > 	"SETPCAP",
	I0813 00:05:43.880350  826514 command_runner.go:124] > 	"NET_BIND_SERVICE",
	I0813 00:05:43.880354  826514 command_runner.go:124] > 	"KILL",
	I0813 00:05:43.880357  826514 command_runner.go:124] > ]
	I0813 00:05:43.880364  826514 command_runner.go:124] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0813 00:05:43.880371  826514 command_runner.go:124] > # defined in the container json file by the user/kube will be added.
	I0813 00:05:43.880377  826514 command_runner.go:124] > default_sysctls = [
	I0813 00:05:43.880380  826514 command_runner.go:124] > ]
	I0813 00:05:43.880387  826514 command_runner.go:124] > # List of additional devices, specified as
	I0813 00:05:43.880395  826514 command_runner.go:124] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0813 00:05:43.880403  826514 command_runner.go:124] > # If it is empty or commented out, only the devices
	I0813 00:05:43.880409  826514 command_runner.go:124] > # defined in the container json file by the user/kube will be added.
	I0813 00:05:43.880415  826514 command_runner.go:124] > additional_devices = [
	I0813 00:05:43.880418  826514 command_runner.go:124] > ]
	I0813 00:05:43.880426  826514 command_runner.go:124] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0813 00:05:43.880434  826514 command_runner.go:124] > # directories does not exist, then CRI-O will automatically skip them.
	I0813 00:05:43.880437  826514 command_runner.go:124] > hooks_dir = [
	I0813 00:05:43.880442  826514 command_runner.go:124] > 	"/usr/share/containers/oci/hooks.d",
	I0813 00:05:43.880445  826514 command_runner.go:124] > ]
	I0813 00:05:43.880451  826514 command_runner.go:124] > # Path to the file specifying the defaults mounts for each container. The
	I0813 00:05:43.880459  826514 command_runner.go:124] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0813 00:05:43.880468  826514 command_runner.go:124] > # its default mounts from the following two files:
	I0813 00:05:43.880472  826514 command_runner.go:124] > #
	I0813 00:05:43.880479  826514 command_runner.go:124] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0813 00:05:43.880488  826514 command_runner.go:124] > #      override file, where users can either add in their own default mounts, or
	I0813 00:05:43.880493  826514 command_runner.go:124] > #      override the default mounts shipped with the package.
	I0813 00:05:43.880499  826514 command_runner.go:124] > #
	I0813 00:05:43.880505  826514 command_runner.go:124] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0813 00:05:43.880515  826514 command_runner.go:124] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0813 00:05:43.880521  826514 command_runner.go:124] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0813 00:05:43.880529  826514 command_runner.go:124] > #      only add mounts it finds in this file.
	I0813 00:05:43.880532  826514 command_runner.go:124] > #
	I0813 00:05:43.880536  826514 command_runner.go:124] > #default_mounts_file = ""
	I0813 00:05:43.880541  826514 command_runner.go:124] > # Maximum number of processes allowed in a container.
	I0813 00:05:43.880546  826514 command_runner.go:124] > pids_limit = 1024
	I0813 00:05:43.880552  826514 command_runner.go:124] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0813 00:05:43.880561  826514 command_runner.go:124] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0813 00:05:43.880568  826514 command_runner.go:124] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0813 00:05:43.880577  826514 command_runner.go:124] > # limit is never exceeded.
	I0813 00:05:43.880581  826514 command_runner.go:124] > log_size_max = -1
	I0813 00:05:43.880608  826514 command_runner.go:124] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0813 00:05:43.880619  826514 command_runner.go:124] > log_to_journald = false
	I0813 00:05:43.880631  826514 command_runner.go:124] > # Path to directory in which container exit files are written to by conmon.
	I0813 00:05:43.880641  826514 command_runner.go:124] > container_exits_dir = "/var/run/crio/exits"
	I0813 00:05:43.880650  826514 command_runner.go:124] > # Path to directory for container attach sockets.
	I0813 00:05:43.880661  826514 command_runner.go:124] > container_attach_socket_dir = "/var/run/crio"
	I0813 00:05:43.880673  826514 command_runner.go:124] > # The prefix to use for the source of the bind mounts.
	I0813 00:05:43.880683  826514 command_runner.go:124] > bind_mount_prefix = ""
	I0813 00:05:43.880693  826514 command_runner.go:124] > # If set to true, all containers will run in read-only mode.
	I0813 00:05:43.880702  826514 command_runner.go:124] > read_only = false
	I0813 00:05:43.880714  826514 command_runner.go:124] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0813 00:05:43.880727  826514 command_runner.go:124] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0813 00:05:43.880737  826514 command_runner.go:124] > # live configuration reload.
	I0813 00:05:43.880744  826514 command_runner.go:124] > log_level = "info"
	I0813 00:05:43.880753  826514 command_runner.go:124] > # Filter the log messages by the provided regular expression.
	I0813 00:05:43.880763  826514 command_runner.go:124] > # This option supports live configuration reload.
	I0813 00:05:43.880770  826514 command_runner.go:124] > log_filter = ""
	I0813 00:05:43.880781  826514 command_runner.go:124] > # The UID mappings for the user namespace of each container. A range is
	I0813 00:05:43.880794  826514 command_runner.go:124] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0813 00:05:43.880803  826514 command_runner.go:124] > # separated by comma.
	I0813 00:05:43.880810  826514 command_runner.go:124] > uid_mappings = ""
	I0813 00:05:43.880819  826514 command_runner.go:124] > # The GID mappings for the user namespace of each container. A range is
	I0813 00:05:43.880831  826514 command_runner.go:124] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0813 00:05:43.880840  826514 command_runner.go:124] > # separated by comma.
	I0813 00:05:43.880846  826514 command_runner.go:124] > gid_mappings = ""
	I0813 00:05:43.880858  826514 command_runner.go:124] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0813 00:05:43.880869  826514 command_runner.go:124] > # regarding the proper termination of the container. The lowest possible
	I0813 00:05:43.880881  826514 command_runner.go:124] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0813 00:05:43.880888  826514 command_runner.go:124] > ctr_stop_timeout = 30
	I0813 00:05:43.880899  826514 command_runner.go:124] > # manage_ns_lifecycle determines whether we pin and remove namespaces
	I0813 00:05:43.880910  826514 command_runner.go:124] > # and manage their lifecycle.
	I0813 00:05:43.880924  826514 command_runner.go:124] > # This option is being deprecated, and will be unconditionally true in the future.
	I0813 00:05:43.880934  826514 command_runner.go:124] > manage_ns_lifecycle = true
	I0813 00:05:43.880945  826514 command_runner.go:124] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0813 00:05:43.880957  826514 command_runner.go:124] > # when a pod does not have a private PID namespace, and does not use
	I0813 00:05:43.880967  826514 command_runner.go:124] > # a kernel separating runtime (like kata).
	I0813 00:05:43.880972  826514 command_runner.go:124] > # It requires manage_ns_lifecycle to be true.
	I0813 00:05:43.880978  826514 command_runner.go:124] > drop_infra_ctr = false
	I0813 00:05:43.880985  826514 command_runner.go:124] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0813 00:05:43.880994  826514 command_runner.go:124] > # You can use linux CPU list format to specify desired CPUs.
	I0813 00:05:43.881002  826514 command_runner.go:124] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0813 00:05:43.881008  826514 command_runner.go:124] > # infra_ctr_cpuset = ""
	I0813 00:05:43.881015  826514 command_runner.go:124] > # The directory where the state of the managed namespaces gets tracked.
	I0813 00:05:43.881023  826514 command_runner.go:124] > # Only used when manage_ns_lifecycle is true.
	I0813 00:05:43.881027  826514 command_runner.go:124] > namespaces_dir = "/var/run"
	I0813 00:05:43.881038  826514 command_runner.go:124] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0813 00:05:43.881045  826514 command_runner.go:124] > pinns_path = "/usr/bin/pinns"
	I0813 00:05:43.881051  826514 command_runner.go:124] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0813 00:05:43.881060  826514 command_runner.go:124] > # The name is matched against the runtimes map below. If this value is changed,
	I0813 00:05:43.881068  826514 command_runner.go:124] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0813 00:05:43.881073  826514 command_runner.go:124] > default_runtime = "runc"
	I0813 00:05:43.881081  826514 command_runner.go:124] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0813 00:05:43.881089  826514 command_runner.go:124] > # The runtime to use is picked based on the runtime_handler provided by the CRI.
	I0813 00:05:43.881096  826514 command_runner.go:124] > # If no runtime_handler is provided, the runtime will be picked based on the level
	I0813 00:05:43.881105  826514 command_runner.go:124] > # of trust of the workload. Each entry in the table should follow the format:
	I0813 00:05:43.881110  826514 command_runner.go:124] > #
	I0813 00:05:43.881114  826514 command_runner.go:124] > #[crio.runtime.runtimes.runtime-handler]
	I0813 00:05:43.881119  826514 command_runner.go:124] > #  runtime_path = "/path/to/the/executable"
	I0813 00:05:43.881126  826514 command_runner.go:124] > #  runtime_type = "oci"
	I0813 00:05:43.881131  826514 command_runner.go:124] > #  runtime_root = "/path/to/the/root"
	I0813 00:05:43.881138  826514 command_runner.go:124] > #  privileged_without_host_devices = false
	I0813 00:05:43.881142  826514 command_runner.go:124] > #  allowed_annotations = []
	I0813 00:05:43.881146  826514 command_runner.go:124] > # Where:
	I0813 00:05:43.881152  826514 command_runner.go:124] > # - runtime-handler: name used to identify the runtime
	I0813 00:05:43.881161  826514 command_runner.go:124] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0813 00:05:43.881168  826514 command_runner.go:124] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0813 00:05:43.881178  826514 command_runner.go:124] > #   the runtime executable name, and the runtime executable should be placed
	I0813 00:05:43.881185  826514 command_runner.go:124] > #   in $PATH.
	I0813 00:05:43.881192  826514 command_runner.go:124] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0813 00:05:43.881203  826514 command_runner.go:124] > #   omitted, an "oci" runtime is assumed.
	I0813 00:05:43.881209  826514 command_runner.go:124] > # - runtime_root (optional, string): root directory for storage of containers
	I0813 00:05:43.881215  826514 command_runner.go:124] > #   state.
	I0813 00:05:43.881222  826514 command_runner.go:124] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0813 00:05:43.881231  826514 command_runner.go:124] > #   host devices from being passed to privileged containers.
	I0813 00:05:43.881238  826514 command_runner.go:124] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0813 00:05:43.881250  826514 command_runner.go:124] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0813 00:05:43.881263  826514 command_runner.go:124] > #   The currently recognized values are:
	I0813 00:05:43.881273  826514 command_runner.go:124] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0813 00:05:43.881280  826514 command_runner.go:124] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0813 00:05:43.881287  826514 command_runner.go:124] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0813 00:05:43.881292  826514 command_runner.go:124] > [crio.runtime.runtimes.runc]
	I0813 00:05:43.881297  826514 command_runner.go:124] > runtime_path = "/usr/bin/runc"
	I0813 00:05:43.881301  826514 command_runner.go:124] > runtime_type = "oci"
	I0813 00:05:43.881307  826514 command_runner.go:124] > runtime_root = "/run/runc"
	I0813 00:05:43.881315  826514 command_runner.go:124] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0813 00:05:43.881320  826514 command_runner.go:124] > # running containers
	I0813 00:05:43.881324  826514 command_runner.go:124] > #[crio.runtime.runtimes.crun]
	I0813 00:05:43.881332  826514 command_runner.go:124] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0813 00:05:43.881338  826514 command_runner.go:124] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0813 00:05:43.881345  826514 command_runner.go:124] > # surface and mitigating the consequences of a container breakout.
	I0813 00:05:43.881351  826514 command_runner.go:124] > # Kata Containers with the default configured VMM
	I0813 00:05:43.881356  826514 command_runner.go:124] > #[crio.runtime.runtimes.kata-runtime]
	I0813 00:05:43.881361  826514 command_runner.go:124] > # Kata Containers with the QEMU VMM
	I0813 00:05:43.881366  826514 command_runner.go:124] > #[crio.runtime.runtimes.kata-qemu]
	I0813 00:05:43.881371  826514 command_runner.go:124] > # Kata Containers with the Firecracker VMM
	I0813 00:05:43.881378  826514 command_runner.go:124] > #[crio.runtime.runtimes.kata-fc]
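For reference, enabling one of the commented handlers above follows the documented [crio.runtime.runtimes.<handler>] table format. A minimal sketch, assuming a CRI-O build that reads /etc/crio/crio.conf.d/ drop-ins and that crun is installed at /usr/bin/crun (neither is confirmed by this run; on builds without drop-in support, edit /etc/crio/crio.conf directly):

# Sketch: register crun as an additional runtime handler (assumed paths).
sudo tee /etc/crio/crio.conf.d/10-crun.conf <<'EOF'
[crio.runtime.runtimes.crun]
runtime_path = "/usr/bin/crun"
runtime_type = "oci"
EOF
sudo systemctl restart crio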
	I0813 00:05:43.881385  826514 command_runner.go:124] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0813 00:05:43.881388  826514 command_runner.go:124] > #
	I0813 00:05:43.881394  826514 command_runner.go:124] > # CRI-O reads its configured registries defaults from the system wide
	I0813 00:05:43.881403  826514 command_runner.go:124] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0813 00:05:43.881409  826514 command_runner.go:124] > # you want to modify just CRI-O, you can change the registries configuration in
	I0813 00:05:43.881418  826514 command_runner.go:124] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0813 00:05:43.881424  826514 command_runner.go:124] > # use the system's defaults from /etc/containers/registries.conf.
	I0813 00:05:43.881428  826514 command_runner.go:124] > [crio.image]
	I0813 00:05:43.881434  826514 command_runner.go:124] > # Default transport for pulling images from a remote container storage.
	I0813 00:05:43.881439  826514 command_runner.go:124] > default_transport = "docker://"
	I0813 00:05:43.881446  826514 command_runner.go:124] > # The path to a file containing credentials necessary for pulling images from
	I0813 00:05:43.881453  826514 command_runner.go:124] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0813 00:05:43.881457  826514 command_runner.go:124] > global_auth_file = ""
	I0813 00:05:43.881462  826514 command_runner.go:124] > # The image used to instantiate infra containers.
	I0813 00:05:43.881468  826514 command_runner.go:124] > # This option supports live configuration reload.
	I0813 00:05:43.881473  826514 command_runner.go:124] > pause_image = "k8s.gcr.io/pause:3.4.1"
	I0813 00:05:43.881481  826514 command_runner.go:124] > # The path to a file containing credentials specific for pulling the pause_image from
	I0813 00:05:43.881488  826514 command_runner.go:124] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0813 00:05:43.881495  826514 command_runner.go:124] > # This option supports live configuration reload.
	I0813 00:05:43.881499  826514 command_runner.go:124] > pause_image_auth_file = ""
	I0813 00:05:43.881506  826514 command_runner.go:124] > # The command to run to have a container stay in the paused state.
	I0813 00:05:43.881512  826514 command_runner.go:124] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0813 00:05:43.881519  826514 command_runner.go:124] > # specified in the pause image. When commented out, it will fallback to the
	I0813 00:05:43.881525  826514 command_runner.go:124] > # default: "/pause". This option supports live configuration reload.
	I0813 00:05:43.881530  826514 command_runner.go:124] > pause_command = "/pause"
	I0813 00:05:43.881537  826514 command_runner.go:124] > # Path to the file which decides what sort of policy we use when deciding
	I0813 00:05:43.881546  826514 command_runner.go:124] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0813 00:05:43.881552  826514 command_runner.go:124] > # this option be used, as the default behavior of using the system-wide default
	I0813 00:05:43.881560  826514 command_runner.go:124] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0813 00:05:43.881565  826514 command_runner.go:124] > # refer to containers-policy.json(5) for more details.
	I0813 00:05:43.881570  826514 command_runner.go:124] > signature_policy = ""
	I0813 00:05:43.881576  826514 command_runner.go:124] > # List of registries to skip TLS verification for pulling images. Please
	I0813 00:05:43.881585  826514 command_runner.go:124] > # consider configuring the registries via /etc/containers/registries.conf before
	I0813 00:05:43.881589  826514 command_runner.go:124] > # changing them here.
	I0813 00:05:43.881595  826514 command_runner.go:124] > #insecure_registries = "[]"
	I0813 00:05:43.881602  826514 command_runner.go:124] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0813 00:05:43.881609  826514 command_runner.go:124] > # ignore; the latter will ignore volumes entirely.
	I0813 00:05:43.881613  826514 command_runner.go:124] > image_volumes = "mkdir"
	I0813 00:05:43.881619  826514 command_runner.go:124] > # List of registries to be used when pulling an unqualified image (e.g.,
	I0813 00:05:43.881626  826514 command_runner.go:124] > # "alpine:latest"). By default, registries is set to "docker.io" for
	I0813 00:05:43.881633  826514 command_runner.go:124] > # compatibility reasons. Depending on your workload and use case you may add more
	I0813 00:05:43.881642  826514 command_runner.go:124] > # registries (e.g., "quay.io", "registry.fedoraproject.org",
	I0813 00:05:43.881646  826514 command_runner.go:124] > # "registry.opensuse.org", etc.).
	I0813 00:05:43.881650  826514 command_runner.go:124] > #registries = [
	I0813 00:05:43.881654  826514 command_runner.go:124] > # 	"docker.io",
	I0813 00:05:43.881657  826514 command_runner.go:124] > #]
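The commented registries list above can likewise be overridden for CRI-O alone, without touching the system-wide /etc/containers/registries.conf; a sketch under the same drop-in assumption, with illustrative registry names:

# Sketch: set the search registries for unqualified image names.
sudo tee /etc/crio/crio.conf.d/20-registries.conf <<'EOF'
[crio.image]
registries = [ "docker.io", "quay.io" ]
EOF
sudo systemctl restart crio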
	I0813 00:05:43.881662  826514 command_runner.go:124] > # Temporary directory to use for storing big files
	I0813 00:05:43.881668  826514 command_runner.go:124] > big_files_temporary_dir = ""
	I0813 00:05:43.881674  826514 command_runner.go:124] > # The crio.network table contains settings pertaining to the management of
	I0813 00:05:43.881682  826514 command_runner.go:124] > # CNI plugins.
	I0813 00:05:43.881686  826514 command_runner.go:124] > [crio.network]
	I0813 00:05:43.881697  826514 command_runner.go:124] > # The default CNI network name to be selected. If not set or "", then
	I0813 00:05:43.881703  826514 command_runner.go:124] > # CRI-O will pick-up the first one found in network_dir.
	I0813 00:05:43.881708  826514 command_runner.go:124] > # cni_default_network = "kindnet"
	I0813 00:05:43.881714  826514 command_runner.go:124] > # Path to the directory where CNI configuration files are located.
	I0813 00:05:43.881718  826514 command_runner.go:124] > network_dir = "/etc/cni/net.d/"
	I0813 00:05:43.881726  826514 command_runner.go:124] > # Paths to directories where CNI plugin binaries are located.
	I0813 00:05:43.881731  826514 command_runner.go:124] > plugin_dirs = [
	I0813 00:05:43.881735  826514 command_runner.go:124] > 	"/opt/cni/bin/",
	I0813 00:05:43.881738  826514 command_runner.go:124] > ]
	I0813 00:05:43.881744  826514 command_runner.go:124] > # A necessary configuration for Prometheus based metrics retrieval
	I0813 00:05:43.881751  826514 command_runner.go:124] > [crio.metrics]
	I0813 00:05:43.881756  826514 command_runner.go:124] > # Globally enable or disable metrics support.
	I0813 00:05:43.881759  826514 command_runner.go:124] > enable_metrics = true
	I0813 00:05:43.881766  826514 command_runner.go:124] > # The port on which the metrics server will listen.
	I0813 00:05:43.881770  826514 command_runner.go:124] > metrics_port = 9090
	I0813 00:05:43.881811  826514 command_runner.go:124] > # Local socket path to bind the metrics server to
	I0813 00:05:43.881819  826514 command_runner.go:124] > metrics_socket = ""
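With enable_metrics = true and metrics_port = 9090 as configured above, the Prometheus endpoint can be spot-checked from inside the node (e.g. after minikube ssh -p multinode-20210813000359-820289):

# Fetch a few CRI-O metrics from the port configured above.
curl -s http://127.0.0.1:9090/metrics | head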
	I0813 00:05:43.881887  826514 cni.go:93] Creating CNI manager for ""
	I0813 00:05:43.881896  826514 cni.go:154] 2 nodes found, recommending kindnet
	I0813 00:05:43.881906  826514 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 00:05:43.881919  826514 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.152 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20210813000359-820289 NodeName:multinode-20210813000359-820289-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.22"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.39.152 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 00:05:43.882032  826514 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.152
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "multinode-20210813000359-820289-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.152
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.22"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
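	Once the node has joined, the cluster-side copy of this configuration can be compared against the rendered config above; the kubectl context name equals the profile name from this run:

	# Inspect the kubeadm ClusterConfiguration stored in the cluster (the same
	# command kubeadm itself suggests in its preflight output further below).
	kubectl --context multinode-20210813000359-820289 -n kube-system get cm kubeadm-config -o yaml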
	
	I0813 00:05:43.882100  826514 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --hostname-override=multinode-20210813000359-820289-m02 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.152 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:multinode-20210813000359-820289 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0813 00:05:43.882145  826514 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 00:05:43.889210  826514 command_runner.go:124] > kubeadm
	I0813 00:05:43.889221  826514 command_runner.go:124] > kubectl
	I0813 00:05:43.889225  826514 command_runner.go:124] > kubelet
	I0813 00:05:43.889601  826514 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 00:05:43.889657  826514 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0813 00:05:43.896461  826514 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (516 bytes)
	I0813 00:05:43.907373  826514 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
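A quick sanity check (not part of this run) that systemd sees the kubelet.service file and the 10-kubeadm.conf drop-in just copied over, run on the node:

# Print the kubelet unit together with its drop-ins as systemd resolved them.
sudo systemctl cat kubelet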
	I0813 00:05:43.918272  826514 ssh_runner.go:149] Run: grep 192.168.39.22	control-plane.minikube.internal$ /etc/hosts
	I0813 00:05:43.921976  826514 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.22	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 00:05:43.932277  826514 host.go:66] Checking if "multinode-20210813000359-820289" exists ...
	I0813 00:05:43.932613  826514 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 00:05:43.932647  826514 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 00:05:43.943596  826514 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37109
	I0813 00:05:43.944060  826514 main.go:130] libmachine: () Calling .GetVersion
	I0813 00:05:43.944545  826514 main.go:130] libmachine: Using API Version  1
	I0813 00:05:43.944568  826514 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 00:05:43.944901  826514 main.go:130] libmachine: () Calling .GetMachineName
	I0813 00:05:43.945107  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .DriverName
	I0813 00:05:43.945221  826514 start.go:241] JoinCluster: &{Name:multinode-20210813000359-820289 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12122/minikube-v1.22.0-1628238775-12122.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-20210813000359-820289 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.22 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.152 Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0}
	I0813 00:05:43.945312  826514 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm token create --print-join-command --ttl=0"
	I0813 00:05:43.945327  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHHostname
	I0813 00:05:43.950486  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:05:43.950875  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e4:55", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:04:13 +0000 UTC Type:0 Mac:52:54:00:b5:e4:55 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:multinode-20210813000359-820289 Clientid:01:52:54:00:b5:e4:55}
	I0813 00:05:43.950907  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined IP address 192.168.39.22 and MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:05:43.951073  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHPort
	I0813 00:05:43.951237  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHKeyPath
	I0813 00:05:43.951421  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHUsername
	I0813 00:05:43.951570  826514 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289/id_rsa Username:docker}
	I0813 00:05:44.148226  826514 command_runner.go:124] > kubeadm join control-plane.minikube.internal:8443 --token sgcbtd.wvx1ip84hjdcplgt --discovery-token-ca-cert-hash sha256:a2926d5accd9a2d1096d4e62979978bfd5a94255856b68d015a34969efd36535 
	I0813 00:05:44.151001  826514 start.go:262] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.152 Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}
	I0813 00:05:44.151048  826514 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm join control-plane.minikube.internal:8443 --token sgcbtd.wvx1ip84hjdcplgt --discovery-token-ca-cert-hash sha256:a2926d5accd9a2d1096d4e62979978bfd5a94255856b68d015a34969efd36535 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-20210813000359-820289-m02"
	I0813 00:05:44.270484  826514 command_runner.go:124] > [preflight] Running pre-flight checks
	I0813 00:05:44.590024  826514 command_runner.go:124] > [preflight] Reading configuration from the cluster...
	I0813 00:05:44.590068  826514 command_runner.go:124] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0813 00:05:44.654543  826514 command_runner.go:124] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0813 00:05:44.655134  826514 command_runner.go:124] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0813 00:05:44.655195  826514 command_runner.go:124] > [kubelet-start] Starting the kubelet
	I0813 00:05:44.847085  826514 command_runner.go:124] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0813 00:05:50.981570  826514 command_runner.go:124] > This node has joined the cluster:
	I0813 00:05:50.981609  826514 command_runner.go:124] > * Certificate signing request was sent to apiserver and a response was received.
	I0813 00:05:50.981619  826514 command_runner.go:124] > * The Kubelet was informed of the new secure connection details.
	I0813 00:05:50.981631  826514 command_runner.go:124] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0813 00:05:50.983121  826514 command_runner.go:124] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0813 00:05:50.983155  826514 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm join control-plane.minikube.internal:8443 --token sgcbtd.wvx1ip84hjdcplgt --discovery-token-ca-cert-hash sha256:a2926d5accd9a2d1096d4e62979978bfd5a94255856b68d015a34969efd36535 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-20210813000359-820289-m02": (6.83208802s)
	I0813 00:05:50.983183  826514 ssh_runner.go:149] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0813 00:05:51.339797  826514 command_runner.go:124] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
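The join sequence above can be reproduced by hand with the same commands minikube issued; the token and hash placeholders stand in for the values printed by the first command:

# On the control plane: mint a join command (same flags as used above).
sudo kubeadm token create --print-join-command --ttl=0
# On the worker: run the printed command with the CRI-O socket, then enable
# the kubelet to clear the [WARNING Service-Kubelet] noted above.
sudo kubeadm join control-plane.minikube.internal:8443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --cri-socket /var/run/crio/crio.sock
sudo systemctl enable --now kubelet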
	I0813 00:05:51.339946  826514 start.go:243] JoinCluster complete in 7.394719912s
	I0813 00:05:51.339976  826514 cni.go:93] Creating CNI manager for ""
	I0813 00:05:51.339984  826514 cni.go:154] 2 nodes found, recommending kindnet
	I0813 00:05:51.340055  826514 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0813 00:05:51.345313  826514 command_runner.go:124] >   File: /opt/cni/bin/portmap
	I0813 00:05:51.345337  826514 command_runner.go:124] >   Size: 2853400   	Blocks: 5576       IO Block: 4096   regular file
	I0813 00:05:51.345347  826514 command_runner.go:124] > Device: 10h/16d	Inode: 22646       Links: 1
	I0813 00:05:51.345363  826514 command_runner.go:124] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0813 00:05:51.345371  826514 command_runner.go:124] > Access: 2021-08-13 00:04:13.266164804 +0000
	I0813 00:05:51.345380  826514 command_runner.go:124] > Modify: 2021-08-06 09:23:24.000000000 +0000
	I0813 00:05:51.345387  826514 command_runner.go:124] > Change: 2021-08-13 00:04:09.548164804 +0000
	I0813 00:05:51.345394  826514 command_runner.go:124] >  Birth: -
	I0813 00:05:51.345719  826514 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0813 00:05:51.345734  826514 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0813 00:05:51.361711  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0813 00:05:51.683551  826514 command_runner.go:124] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0813 00:05:51.685943  826514 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0813 00:05:51.689157  826514 command_runner.go:124] > serviceaccount/kindnet unchanged
	I0813 00:05:51.709844  826514 command_runner.go:124] > daemonset.apps/kindnet configured
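To confirm the kindnet manifest applied above actually rolled out on both nodes, one hedged follow-up (the namespace is assumed to be kube-system, where kindnet normally runs; it is not shown in this log):

# Wait for the kindnet DaemonSet to be ready on every node.
kubectl --context multinode-20210813000359-820289 -n kube-system rollout status ds/kindnet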
	I0813 00:05:51.712026  826514 start.go:226] Will wait 6m0s for node &{Name:m02 IP:192.168.39.152 Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}
	I0813 00:05:51.714238  826514 out.go:177] * Verifying Kubernetes components...
	I0813 00:05:51.714326  826514 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 00:05:51.725966  826514 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0813 00:05:51.726187  826514 kapi.go:59] client config for multinode-20210813000359-820289: &rest.Config{Host:"https://192.168.39.22:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 00:05:51.727523  826514 node_ready.go:35] waiting up to 6m0s for node "multinode-20210813000359-820289-m02" to be "Ready" ...
	I0813 00:05:51.727594  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289-m02
	I0813 00:05:51.727602  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:51.727608  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:51.727612  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:51.729926  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:51.729943  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:51.729949  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:51.729953  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:51.729958  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:51.729962  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:51.729966  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:51 GMT
	I0813 00:05:51.730254  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289-m02","uid":"cde91947-f0b9-43a6-9224-431ab64254d1","resourceVersion":"558","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{" [truncated 5722 chars]
	I0813 00:05:51.730504  826514 node_ready.go:49] node "multinode-20210813000359-820289-m02" has status "Ready":"True"
	I0813 00:05:51.730516  826514 node_ready.go:38] duration metric: took 2.972685ms waiting for node "multinode-20210813000359-820289-m02" to be "Ready" ...
	I0813 00:05:51.730526  826514 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
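The raw GETs that follow implement these readiness checks; the same waits can be expressed with kubectl, using the node name and the label selectors listed above (same 6m budget):

# Equivalent one-liners for the checks below.
kubectl --context multinode-20210813000359-820289 wait --for=condition=Ready \
  node/multinode-20210813000359-820289-m02 --timeout=6m
for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
           component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
  kubectl --context multinode-20210813000359-820289 -n kube-system \
    wait --for=condition=Ready pod -l "$sel" --timeout=6m
done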
	I0813 00:05:51.730589  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods
	I0813 00:05:51.730599  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:51.730606  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:51.730612  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:51.737725  826514 round_trippers.go:457] Response Status: 200 OK in 7 milliseconds
	I0813 00:05:51.737744  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:51.737751  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:51.737755  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:51.737760  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:51.737764  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:51.737769  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:51 GMT
	I0813 00:05:51.740103  826514 request.go:1123] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"559"},"items":[{"metadata":{"name":"coredns-558bd4d5db-sstrb","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"16f6c77d-26a2-47e7-9c19-74736961cc13","resourceVersion":"480","creationTimestamp":"2021-08-13T00:05:06Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"65fbbea7-e9b5-486b-a329-69ff09468fc0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65fbbea7-e9b5-486b-a329-69ff09468fc0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k: [truncated 63733 chars]
	I0813 00:05:51.741507  826514 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-sstrb" in "kube-system" namespace to be "Ready" ...
	I0813 00:05:51.741589  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-sstrb
	I0813 00:05:51.741598  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:51.741603  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:51.741607  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:51.743554  826514 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:05:51.743570  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:51.743576  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:51 GMT
	I0813 00:05:51.743581  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:51.743586  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:51.743590  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:51.743594  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:51.743799  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-sstrb","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"16f6c77d-26a2-47e7-9c19-74736961cc13","resourceVersion":"480","creationTimestamp":"2021-08-13T00:05:06Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"65fbbea7-e9b5-486b-a329-69ff09468fc0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65fbbea7-e9b5-486b-a329-69ff09468fc0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5734 chars]
	I0813 00:05:51.744173  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289
	I0813 00:05:51.744188  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:51.744195  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:51.744202  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:51.745944  826514 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:05:51.745953  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:51.745958  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:51.745964  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:51 GMT
	I0813 00:05:51.745968  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:51.745973  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:51.745977  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:51.746186  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","resourceVersion":"378","creationTimestamp":"2021-08-13T00:04:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813000359-820289","minikube.k8s.io/updated_at":"2021_08_13T00_04_54_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6557 chars]
	I0813 00:05:51.746456  826514 pod_ready.go:92] pod "coredns-558bd4d5db-sstrb" in "kube-system" namespace has status "Ready":"True"
	I0813 00:05:51.746468  826514 pod_ready.go:81] duration metric: took 4.942023ms waiting for pod "coredns-558bd4d5db-sstrb" in "kube-system" namespace to be "Ready" ...
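Each of these logged round-trips (a pod GET followed by a node GET) can be replayed with curl using the client credentials from the kapi client config above; the paths and endpoint are the ones from this run:

# Replay one of the logged API calls with the run's client certificates.
MK=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
curl -s --cacert "$MK/ca.crt" \
  --cert "$MK/profiles/multinode-20210813000359-820289/client.crt" \
  --key  "$MK/profiles/multinode-20210813000359-820289/client.key" \
  https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-sstrb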
	I0813 00:05:51.746476  826514 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-20210813000359-820289" in "kube-system" namespace to be "Ready" ...
	I0813 00:05:51.746532  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210813000359-820289
	I0813 00:05:51.746543  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:51.746550  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:51.746556  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:51.748676  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:51.748684  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:51.748691  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:51.748694  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:51 GMT
	I0813 00:05:51.748697  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:51.748703  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:51.748706  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:51.748850  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210813000359-820289","namespace":"kube-system","uid":"2d8ff24a-3267-4d8b-a528-3da3d3b70e54","resourceVersion":"330","creationTimestamp":"2021-08-13T00:04:59Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.22:2379","kubernetes.io/config.hash":"bc4647bddd439c8d0983a3b358a72513","kubernetes.io/config.mirror":"bc4647bddd439c8d0983a3b358a72513","kubernetes.io/config.seen":"2021-08-13T00:04:59.185501301Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.ha [truncated 5574 chars]
	I0813 00:05:51.749128  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289
	I0813 00:05:51.749141  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:51.749146  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:51.749149  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:51.750745  826514 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:05:51.750761  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:51.750768  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:51.750773  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:51 GMT
	I0813 00:05:51.750778  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:51.750782  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:51.750786  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:51.750970  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","resourceVersion":"378","creationTimestamp":"2021-08-13T00:04:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813000359-820289","minikube.k8s.io/updated_at":"2021_08_13T00_04_54_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6557 chars]
	I0813 00:05:51.751255  826514 pod_ready.go:92] pod "etcd-multinode-20210813000359-820289" in "kube-system" namespace has status "Ready":"True"
	I0813 00:05:51.751274  826514 pod_ready.go:81] duration metric: took 4.790813ms waiting for pod "etcd-multinode-20210813000359-820289" in "kube-system" namespace to be "Ready" ...
	I0813 00:05:51.751293  826514 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-20210813000359-820289" in "kube-system" namespace to be "Ready" ...
	I0813 00:05:51.751353  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20210813000359-820289
	I0813 00:05:51.751365  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:51.751372  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:51.751378  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:51.753644  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:51.753654  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:51.753658  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:51.753661  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:51.753664  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:51.753667  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:51.753670  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:51 GMT
	I0813 00:05:51.753845  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20210813000359-820289","namespace":"kube-system","uid":"b5954b4a-9e51-488b-a0fa-cacb7de86621","resourceVersion":"450","creationTimestamp":"2021-08-13T00:04:59Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.22:8443","kubernetes.io/config.hash":"e0dcc263218298eb0bc9dd91ad6c2c6d","kubernetes.io/config.mirror":"e0dcc263218298eb0bc9dd91ad6c2c6d","kubernetes.io/config.seen":"2021-08-13T00:04:59.185603315Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-addre [truncated 7252 chars]
	I0813 00:05:51.754159  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289
	I0813 00:05:51.754172  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:51.754177  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:51.754180  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:51.756001  826514 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:05:51.756019  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:51.756025  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:51 GMT
	I0813 00:05:51.756030  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:51.756034  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:51.756038  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:51.756045  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:51.756507  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","resourceVersion":"378","creationTimestamp":"2021-08-13T00:04:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813000359-820289","minikube.k8s.io/updated_at":"2021_08_13T00_04_54_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6557 chars]
	I0813 00:05:51.756781  826514 pod_ready.go:92] pod "kube-apiserver-multinode-20210813000359-820289" in "kube-system" namespace has status "Ready":"True"
	I0813 00:05:51.756796  826514 pod_ready.go:81] duration metric: took 5.493253ms waiting for pod "kube-apiserver-multinode-20210813000359-820289" in "kube-system" namespace to be "Ready" ...
	I0813 00:05:51.756807  826514 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-20210813000359-820289" in "kube-system" namespace to be "Ready" ...
	I0813 00:05:51.756862  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210813000359-820289
	I0813 00:05:51.756873  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:51.756879  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:51.756885  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:51.758693  826514 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:05:51.758708  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:51.758711  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:51.758714  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:51.758717  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:51.758720  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:51.758723  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:51 GMT
	I0813 00:05:51.758893  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210813000359-820289","namespace":"kube-system","uid":"f25a529b-df04-44a7-aa11-5f04f8acaaf9","resourceVersion":"452","creationTimestamp":"2021-08-13T00:04:53Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8e7d0bc335d72432dc6bd22d4541dfbd","kubernetes.io/config.mirror":"8e7d0bc335d72432dc6bd22d4541dfbd","kubernetes.io/config.seen":"2021-08-13T00:04:42.246742645Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config [truncated 6813 chars]
	I0813 00:05:51.759159  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289
	I0813 00:05:51.759171  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:51.759175  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:51.759180  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:51.761299  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:51.761309  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:51.761312  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:51 GMT
	I0813 00:05:51.761315  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:51.761320  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:51.761325  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:51.761328  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:51.761468  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","resourceVersion":"378","creationTimestamp":"2021-08-13T00:04:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813000359-820289","minikube.k8s.io/updated_at":"2021_08_13T00_04_54_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6557 chars]
	I0813 00:05:51.761768  826514 pod_ready.go:92] pod "kube-controller-manager-multinode-20210813000359-820289" in "kube-system" namespace has status "Ready":"True"
	I0813 00:05:51.761779  826514 pod_ready.go:81] duration metric: took 4.964506ms waiting for pod "kube-controller-manager-multinode-20210813000359-820289" in "kube-system" namespace to be "Ready" ...
	I0813 00:05:51.761789  826514 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8h4t8" in "kube-system" namespace to be "Ready" ...
	I0813 00:05:51.928175  826514 request.go:600] Waited for 166.312095ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8h4t8
	I0813 00:05:51.928236  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8h4t8
	I0813 00:05:51.928242  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:51.928248  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:51.928252  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:51.931345  826514 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 00:05:51.931364  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:51.931370  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:51.931375  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:51.931379  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:51.931384  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:51 GMT
	I0813 00:05:51.931389  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:51.931529  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8h4t8","generateName":"kube-proxy-","namespace":"kube-system","uid":"414dc4de-68ad-43b2-9847-aa0917e6d69d","resourceVersion":"547","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e61f546c-a1b1-4412-af08-7c2ebe78d772","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e61f546c-a1b1-4412-af08-7c2ebe78d772\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 4297 chars]
	I0813 00:05:52.128272  826514 request.go:600] Waited for 196.347658ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289-m02
	I0813 00:05:52.128331  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289-m02
	I0813 00:05:52.128338  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:52.128346  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:52.128352  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:52.131528  826514 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 00:05:52.131547  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:52.131551  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:52.131555  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:52 GMT
	I0813 00:05:52.131558  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:52.131564  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:52.131570  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:52.131753  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289-m02","uid":"cde91947-f0b9-43a6-9224-431ab64254d1","resourceVersion":"558","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{" [truncated 5722 chars]
	I0813 00:05:52.632912  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8h4t8
	I0813 00:05:52.632944  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:52.632952  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:52.632957  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:52.635329  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:52.635350  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:52.635356  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:52.635361  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:52.635368  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:52.635372  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:52.635377  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:52 GMT
	I0813 00:05:52.635523  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8h4t8","generateName":"kube-proxy-","namespace":"kube-system","uid":"414dc4de-68ad-43b2-9847-aa0917e6d69d","resourceVersion":"547","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e61f546c-a1b1-4412-af08-7c2ebe78d772","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e61f546c-a1b1-4412-af08-7c2ebe78d772\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 4297 chars]
	I0813 00:05:52.635890  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289-m02
	I0813 00:05:52.635908  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:52.635916  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:52.635921  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:52.637983  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:52.638007  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:52.638013  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:52 GMT
	I0813 00:05:52.638018  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:52.638023  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:52.638027  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:52.638031  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:52.638234  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289-m02","uid":"cde91947-f0b9-43a6-9224-431ab64254d1","resourceVersion":"558","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{" [truncated 5722 chars]
	I0813 00:05:53.133010  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8h4t8
	I0813 00:05:53.133038  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:53.133044  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:53.133048  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:53.135834  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:53.135853  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:53.135859  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:53.135864  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:53 GMT
	I0813 00:05:53.135869  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:53.135873  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:53.135877  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:53.136279  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8h4t8","generateName":"kube-proxy-","namespace":"kube-system","uid":"414dc4de-68ad-43b2-9847-aa0917e6d69d","resourceVersion":"562","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e61f546c-a1b1-4412-af08-7c2ebe78d772","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e61f546c-a1b1-4412-af08-7c2ebe78d772\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5807 chars]
	I0813 00:05:53.136602  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289-m02
	I0813 00:05:53.136615  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:53.136620  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:53.136624  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:53.139811  826514 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 00:05:53.139828  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:53.139833  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:53.139838  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:53.139842  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:53.139847  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:53.139852  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:53 GMT
	I0813 00:05:53.140290  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289-m02","uid":"cde91947-f0b9-43a6-9224-431ab64254d1","resourceVersion":"558","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{" [truncated 5722 chars]
	I0813 00:05:53.633013  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8h4t8
	I0813 00:05:53.633037  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:53.633049  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:53.633053  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:53.639537  826514 round_trippers.go:457] Response Status: 200 OK in 6 milliseconds
	I0813 00:05:53.639556  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:53.639561  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:53.639564  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:53.639567  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:53.639570  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:53.639573  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:53 GMT
	I0813 00:05:53.639680  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8h4t8","generateName":"kube-proxy-","namespace":"kube-system","uid":"414dc4de-68ad-43b2-9847-aa0917e6d69d","resourceVersion":"562","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e61f546c-a1b1-4412-af08-7c2ebe78d772","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e61f546c-a1b1-4412-af08-7c2ebe78d772\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5807 chars]
	I0813 00:05:53.640027  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289-m02
	I0813 00:05:53.640042  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:53.640047  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:53.640051  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:53.642477  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:53.642492  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:53.642497  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:53.642502  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:53.642506  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:53 GMT
	I0813 00:05:53.642511  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:53.642516  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:53.642705  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289-m02","uid":"cde91947-f0b9-43a6-9224-431ab64254d1","resourceVersion":"558","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{" [truncated 5722 chars]
	I0813 00:05:54.132933  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8h4t8
	I0813 00:05:54.132959  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:54.132965  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:54.132969  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:54.136486  826514 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 00:05:54.136505  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:54.136511  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:54.136516  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:54.136520  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:54.136524  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:54.136528  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:54 GMT
	I0813 00:05:54.136745  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8h4t8","generateName":"kube-proxy-","namespace":"kube-system","uid":"414dc4de-68ad-43b2-9847-aa0917e6d69d","resourceVersion":"562","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e61f546c-a1b1-4412-af08-7c2ebe78d772","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e61f546c-a1b1-4412-af08-7c2ebe78d772\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5807 chars]
	I0813 00:05:54.137188  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289-m02
	I0813 00:05:54.137209  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:54.137214  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:54.137218  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:54.142622  826514 round_trippers.go:457] Response Status: 200 OK in 5 milliseconds
	I0813 00:05:54.142636  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:54.142641  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:54.142646  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:54.142649  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:54.142653  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:54.142657  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:54 GMT
	I0813 00:05:54.142896  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289-m02","uid":"cde91947-f0b9-43a6-9224-431ab64254d1","resourceVersion":"558","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{" [truncated 5722 chars]
	I0813 00:05:54.143117  826514 pod_ready.go:102] pod "kube-proxy-8h4t8" in "kube-system" namespace has status "Ready":"False"
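The alternating GETs above are a readiness poll: roughly every 500ms the pod is fetched and `"Ready":"False"` is logged until the kubelet flips the PodReady condition. A minimal sketch of that wait pattern using standard client-go (an illustration of the pattern, not minikube's pod_ready.go itself; the function name and the 500ms cadence are taken from the timestamps above):

```go
package readiness

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls the pod every 500ms, mirroring the cadence in the log,
// until its PodReady condition reports True or the timeout expires.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				fmt.Printf("pod %q in %q namespace has status \"Ready\":%q\n", name, ns, c.Status)
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil // condition not reported yet
	})
}
```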
	I0813 00:05:54.633164  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8h4t8
	I0813 00:05:54.633185  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:54.633194  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:54.633198  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:54.636463  826514 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 00:05:54.636481  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:54.636485  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:54.636489  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:54.636492  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:54.636495  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:54.636498  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:54 GMT
	I0813 00:05:54.637004  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8h4t8","generateName":"kube-proxy-","namespace":"kube-system","uid":"414dc4de-68ad-43b2-9847-aa0917e6d69d","resourceVersion":"562","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e61f546c-a1b1-4412-af08-7c2ebe78d772","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e61f546c-a1b1-4412-af08-7c2ebe78d772\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5807 chars]
	I0813 00:05:54.637399  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289-m02
	I0813 00:05:54.637413  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:54.637418  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:54.637422  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:54.639553  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:54.639565  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:54.639571  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:54.639576  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:54.639580  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:54.639585  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:54 GMT
	I0813 00:05:54.639590  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:54.639782  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289-m02","uid":"cde91947-f0b9-43a6-9224-431ab64254d1","resourceVersion":"558","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{" [truncated 5722 chars]
	I0813 00:05:55.132441  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8h4t8
	I0813 00:05:55.132468  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:55.132474  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:55.132478  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:55.135768  826514 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 00:05:55.135787  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:55.135792  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:55.135795  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:55.135798  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:55.135804  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:55.135807  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:55 GMT
	I0813 00:05:55.135897  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8h4t8","generateName":"kube-proxy-","namespace":"kube-system","uid":"414dc4de-68ad-43b2-9847-aa0917e6d69d","resourceVersion":"562","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e61f546c-a1b1-4412-af08-7c2ebe78d772","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e61f546c-a1b1-4412-af08-7c2ebe78d772\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5807 chars]
	I0813 00:05:55.136231  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289-m02
	I0813 00:05:55.136245  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:55.136250  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:55.136254  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:55.139466  826514 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 00:05:55.139486  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:55.139493  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:55.139497  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:55 GMT
	I0813 00:05:55.139502  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:55.139507  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:55.139514  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:55.139645  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289-m02","uid":"cde91947-f0b9-43a6-9224-431ab64254d1","resourceVersion":"558","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{" [truncated 5722 chars]
	I0813 00:05:55.633008  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8h4t8
	I0813 00:05:55.633033  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:55.633039  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:55.633044  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:55.638091  826514 round_trippers.go:457] Response Status: 200 OK in 5 milliseconds
	I0813 00:05:55.638153  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:55.638167  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:55 GMT
	I0813 00:05:55.638173  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:55.638179  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:55.638185  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:55.638191  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:55.639141  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8h4t8","generateName":"kube-proxy-","namespace":"kube-system","uid":"414dc4de-68ad-43b2-9847-aa0917e6d69d","resourceVersion":"562","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e61f546c-a1b1-4412-af08-7c2ebe78d772","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e61f546c-a1b1-4412-af08-7c2ebe78d772\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5807 chars]
	I0813 00:05:55.639505  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289-m02
	I0813 00:05:55.639519  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:55.639524  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:55.639528  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:55.641735  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:55.641755  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:55.641762  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:55.641767  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:55.641771  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:55.641775  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:55.641780  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:55 GMT
	I0813 00:05:55.641954  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289-m02","uid":"cde91947-f0b9-43a6-9224-431ab64254d1","resourceVersion":"558","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{" [truncated 5722 chars]
	I0813 00:05:56.132627  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8h4t8
	I0813 00:05:56.132653  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:56.132659  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:56.132663  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:56.136208  826514 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 00:05:56.136270  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:56.136286  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:56.136292  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:56.136297  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:56.136302  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:56.136306  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:56 GMT
	I0813 00:05:56.136457  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8h4t8","generateName":"kube-proxy-","namespace":"kube-system","uid":"414dc4de-68ad-43b2-9847-aa0917e6d69d","resourceVersion":"562","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e61f546c-a1b1-4412-af08-7c2ebe78d772","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e61f546c-a1b1-4412-af08-7c2ebe78d772\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5807 chars]
	I0813 00:05:56.136835  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289-m02
	I0813 00:05:56.136851  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:56.136857  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:56.136863  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:56.141514  826514 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0813 00:05:56.141531  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:56.141537  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:56 GMT
	I0813 00:05:56.141542  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:56.141546  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:56.141550  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:56.141554  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:56.141962  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289-m02","uid":"cde91947-f0b9-43a6-9224-431ab64254d1","resourceVersion":"565","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{" [truncated 5602 chars]
	I0813 00:05:56.633288  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8h4t8
	I0813 00:05:56.633311  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:56.633316  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:56.633320  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:56.642768  826514 round_trippers.go:457] Response Status: 200 OK in 9 milliseconds
	I0813 00:05:56.642789  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:56.642794  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:56.642797  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:56.642801  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:56 GMT
	I0813 00:05:56.642804  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:56.642807  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:56.643101  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8h4t8","generateName":"kube-proxy-","namespace":"kube-system","uid":"414dc4de-68ad-43b2-9847-aa0917e6d69d","resourceVersion":"562","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e61f546c-a1b1-4412-af08-7c2ebe78d772","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e61f546c-a1b1-4412-af08-7c2ebe78d772\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5807 chars]
	I0813 00:05:56.643464  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289-m02
	I0813 00:05:56.643479  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:56.643484  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:56.643488  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:56.645990  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:56.646007  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:56.646013  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:56 GMT
	I0813 00:05:56.646016  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:56.646020  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:56.646026  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:56.646030  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:56.646340  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289-m02","uid":"cde91947-f0b9-43a6-9224-431ab64254d1","resourceVersion":"565","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{" [truncated 5602 chars]
	I0813 00:05:56.646717  826514 pod_ready.go:102] pod "kube-proxy-8h4t8" in "kube-system" namespace has status "Ready":"False"
	I0813 00:05:57.132773  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8h4t8
	I0813 00:05:57.132820  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:57.132830  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:57.132837  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:57.137480  826514 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0813 00:05:57.137503  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:57.137509  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:57.137512  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:57.137516  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:57 GMT
	I0813 00:05:57.137519  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:57.137525  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:57.137905  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8h4t8","generateName":"kube-proxy-","namespace":"kube-system","uid":"414dc4de-68ad-43b2-9847-aa0917e6d69d","resourceVersion":"562","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e61f546c-a1b1-4412-af08-7c2ebe78d772","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e61f546c-a1b1-4412-af08-7c2ebe78d772\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5807 chars]
	I0813 00:05:57.138258  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289-m02
	I0813 00:05:57.138273  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:57.138280  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:57.138286  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:57.141905  826514 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 00:05:57.141918  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:57.141922  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:57.141926  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:57.141929  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:57.141932  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:57.141936  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:57 GMT
	I0813 00:05:57.142359  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289-m02","uid":"cde91947-f0b9-43a6-9224-431ab64254d1","resourceVersion":"565","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{" [truncated 5602 chars]
	I0813 00:05:57.633116  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8h4t8
	I0813 00:05:57.633147  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:57.633152  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:57.633156  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:57.635479  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:57.635499  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:57.635504  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:57.635507  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:57.635510  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:57.635513  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:57 GMT
	I0813 00:05:57.635516  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:57.635841  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8h4t8","generateName":"kube-proxy-","namespace":"kube-system","uid":"414dc4de-68ad-43b2-9847-aa0917e6d69d","resourceVersion":"562","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e61f546c-a1b1-4412-af08-7c2ebe78d772","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e61f546c-a1b1-4412-af08-7c2ebe78d772\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5807 chars]
	I0813 00:05:57.636190  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289-m02
	I0813 00:05:57.636206  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:57.636212  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:57.636216  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:57.639337  826514 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 00:05:57.639349  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:57.639353  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:57.639355  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:57.639358  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:57.639361  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:57.639368  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:57 GMT
	I0813 00:05:57.639821  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289-m02","uid":"cde91947-f0b9-43a6-9224-431ab64254d1","resourceVersion":"565","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{" [truncated 5602 chars]
	I0813 00:05:58.132484  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8h4t8
	I0813 00:05:58.132508  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:58.132515  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:58.132520  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:58.136086  826514 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 00:05:58.136105  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:58.136111  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:58.136116  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:58.136120  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:58.136124  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:58.136128  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:58 GMT
	I0813 00:05:58.136519  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8h4t8","generateName":"kube-proxy-","namespace":"kube-system","uid":"414dc4de-68ad-43b2-9847-aa0917e6d69d","resourceVersion":"571","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e61f546c-a1b1-4412-af08-7c2ebe78d772","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e61f546c-a1b1-4412-af08-7c2ebe78d772\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5772 chars]
	I0813 00:05:58.136897  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289-m02
	I0813 00:05:58.136918  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:58.136925  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:58.136931  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:58.140061  826514 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 00:05:58.140080  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:58.140087  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:58.140092  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:58.140096  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:58.140101  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:58 GMT
	I0813 00:05:58.140105  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:58.140501  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289-m02","uid":"cde91947-f0b9-43a6-9224-431ab64254d1","resourceVersion":"565","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{" [truncated 5602 chars]
	I0813 00:05:58.140755  826514 pod_ready.go:92] pod "kube-proxy-8h4t8" in "kube-system" namespace has status "Ready":"True"
	I0813 00:05:58.140774  826514 pod_ready.go:81] duration metric: took 6.37897864s waiting for pod "kube-proxy-8h4t8" in "kube-system" namespace to be "Ready" ...
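Each poll iteration above also issues a GET for the Node object hosting the pod. A minimal sketch of the matching node-side check, assuming (an assumption, not something the log itself confirms) that the intent is to gate pod readiness on the node's own NodeReady condition:

```go
package readiness

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeReady reports whether the named node's NodeReady condition is True,
// mirroring the GET /api/v1/nodes/... issued on every iteration above.
func nodeReady(cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}
```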
	I0813 00:05:58.140783  826514 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tvtvh" in "kube-system" namespace to be "Ready" ...
	I0813 00:05:58.140847  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tvtvh
	I0813 00:05:58.140859  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:58.140864  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:58.140868  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:58.142647  826514 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:05:58.142663  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:58.142669  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:58.142674  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:58.142678  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:58.142683  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:58.142691  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:58 GMT
	I0813 00:05:58.142981  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tvtvh","generateName":"kube-proxy-","namespace":"kube-system","uid":"7b54ef6d-43f6-4dc1-b3be-c5fb1b57a108","resourceVersion":"476","creationTimestamp":"2021-08-13T00:05:06Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e61f546c-a1b1-4412-af08-7c2ebe78d772","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e61f546c-a1b1-4412-af08-7c2ebe78d772\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5760 chars]
	I0813 00:05:58.143355  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289
	I0813 00:05:58.143372  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:58.143380  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:58.143386  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:58.146307  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:58.146320  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:58.146325  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:58.146328  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:58.146331  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:58.146334  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:58.146337  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:58 GMT
	I0813 00:05:58.147204  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","resourceVersion":"378","creationTimestamp":"2021-08-13T00:04:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813000359-820289","minikube.k8s.io/updated_at":"2021_08_13T00_04_54_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6557 chars]
	I0813 00:05:58.147476  826514 pod_ready.go:92] pod "kube-proxy-tvtvh" in "kube-system" namespace has status "Ready":"True"
	I0813 00:05:58.147485  826514 pod_ready.go:81] duration metric: took 6.694603ms waiting for pod "kube-proxy-tvtvh" in "kube-system" namespace to be "Ready" ...
	I0813 00:05:58.147494  826514 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-20210813000359-820289" in "kube-system" namespace to be "Ready" ...
	I0813 00:05:58.147552  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210813000359-820289
	I0813 00:05:58.147560  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:58.147566  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:58.147572  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:58.150664  826514 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 00:05:58.150677  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:58.150682  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:58.150686  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:58.150691  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:58.150696  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:58 GMT
	I0813 00:05:58.150700  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:58.151607  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20210813000359-820289","namespace":"kube-system","uid":"f92e79ae-a806-4356-8c4f-e58f5355dac5","resourceVersion":"328","creationTimestamp":"2021-08-13T00:04:59Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"113dba97bad3e83d4c789adae2059392","kubernetes.io/config.mirror":"113dba97bad3e83d4c789adae2059392","kubernetes.io/config.seen":"2021-08-13T00:04:59.185608489Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:
kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:la [truncated 4543 chars]
	I0813 00:05:58.151916  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289
	I0813 00:05:58.151933  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:58.151939  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:58.151945  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:58.154488  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:58.154502  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:58.154506  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:58.154509  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:58.154512  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:58.154516  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:58 GMT
	I0813 00:05:58.154519  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:58.156157  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","resourceVersion":"378","creationTimestamp":"2021-08-13T00:04:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813000359-820289","minikube.k8s.io/updated_at":"2021_08_13T00_04_54_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6557 chars]
	I0813 00:05:58.156407  826514 pod_ready.go:92] pod "kube-scheduler-multinode-20210813000359-820289" in "kube-system" namespace has status "Ready":"True"
	I0813 00:05:58.156420  826514 pod_ready.go:81] duration metric: took 8.919354ms waiting for pod "kube-scheduler-multinode-20210813000359-820289" in "kube-system" namespace to be "Ready" ...
	I0813 00:05:58.156430  826514 pod_ready.go:38] duration metric: took 6.425891571s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
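	(The pod_ready entries above come from polling each pod's PodReady condition with repeated GETs like the round_trippers lines show. As a rough illustration only — a minimal client-go sketch, not minikube's actual pod_ready.go; the helper name and kubeconfig path are assumptions — the same wait loop looks like this:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls until the pod's PodReady condition is True,
	// mirroring the GET-per-tick pattern logged above.
	// (Hypothetical helper; not minikube's implementation.)
	func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, err // a failed GET aborts the wait
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil // condition not posted yet; keep polling
		})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitPodReady(cs, "kube-system", "kube-proxy-tvtvh", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println(`pod "kube-proxy-tvtvh" is Ready`)
	}
	)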
	I0813 00:05:58.156456  826514 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 00:05:58.156503  826514 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 00:05:58.168297  826514 system_svc.go:56] duration metric: took 11.837498ms WaitForService to wait for kubelet.
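	(The system_svc check above shells out to systemd: `systemctl is-active --quiet <unit>` exits 0 only when the unit is active, which is all WaitForService needs. A tiny hedged sketch of the same probe, run locally here rather than over ssh_runner:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// isActive mirrors the logged probe: the exit code of
	// `systemctl is-active --quiet <unit>` alone answers the question.
	func isActive(unit string) bool {
		return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
	}

	func main() {
		fmt.Println("kubelet active:", isActive("kubelet"))
	}
	)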
	I0813 00:05:58.168315  826514 kubeadm.go:547] duration metric: took 6.456243842s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 00:05:58.168332  826514 node_conditions.go:102] verifying NodePressure condition ...
	I0813 00:05:58.168378  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes
	I0813 00:05:58.168387  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:58.168391  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:58.168395  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:58.172806  826514 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0813 00:05:58.172819  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:58.172825  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:58.172830  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:58.172834  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:58.172839  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:58.172844  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:58 GMT
	I0813 00:05:58.173613  826514 request.go:1123] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"572"},"items":[{"metadata":{"name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","resourceVersion":"378","creationTimestamp":"2021-08-13T00:04:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813000359-820289","minikube.k8s.io/updated_at":"2021_08_13T00_04_54_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-mana
ged-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","opera [truncated 13204 chars]
	I0813 00:05:58.173957  826514 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 00:05:58.173977  826514 node_conditions.go:123] node cpu capacity is 2
	I0813 00:05:58.173993  826514 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 00:05:58.174000  826514 node_conditions.go:123] node cpu capacity is 2
	I0813 00:05:58.174012  826514 node_conditions.go:105] duration metric: took 5.670044ms to run NodePressure ...
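	(The node_conditions figures come straight from the NodeList response above: each item's status.capacity carries ephemeral-storage and cpu. A hedged client-go sketch that prints the same two numbers per node — kubeconfig path assumed, not minikube's node_conditions.go:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		// GET /api/v1/nodes, the same request logged above.
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			// The two figures the node_conditions lines report per node.
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			fmt.Printf("node %s: ephemeral-storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
		}
	}
	)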
	I0813 00:05:58.174025  826514 start.go:231] waiting for startup goroutines ...
	I0813 00:05:58.217590  826514 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0813 00:05:58.220259  826514 out.go:177] * Done! kubectl is now configured to use "multinode-20210813000359-820289" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Fri 2021-08-13 00:04:10 UTC, end at Fri 2021-08-13 00:06:30 UTC. --
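	(Nearly every entry in the journal below is a ListContainers round-trip on /runtime.v1alpha2.RuntimeService, issued with an empty filter over the crio.sock named in the node's cri-socket annotation earlier. A hedged sketch of issuing the same RPC with the cri-api client — socket path and API version taken from the log, error handling minimal:

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Dial the CRI-O socket named in the node annotations above.
		conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
			grpc.WithInsecure(), grpc.WithBlock())
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		// An empty filter reproduces the "No filters were applied" requests below.
		rt := runtimeapi.NewRuntimeServiceClient(conn)
		resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{},
		})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Println(c.Metadata.Name, c.State)
		}
	}
	)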
	Aug 13 00:06:30 multinode-20210813000359-820289 crio[2072]: time="2021-08-13 00:06:30.236445142Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bb24f7c3733accb9bf83243432a7b1dfe71c6ba332858cb895ee4b90e164c7e,PodSandboxId:8ebe136452482398b02ed3194e73d7f3b47c2ebe0c8636a1a5ea7c3bf727fd52,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,State:CONTAINER_RUNNING,CreatedAt:1628813162681483299,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-84b6686758-gpb9d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b2d74316-9ad3-435c-85a3-19e862cd06d2,},Annotations:map[string]string{io.kubernetes.container.hash: c2687e87,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d93d4cc078f93deae8a572b1cfdba5fb94f5fee55d54e5f8a31839e13954a0a9,PodSandboxId:87950fe11799e2db85a712b095c5a589e895487c070a8985dade001cc54d69d3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1,State:CONTAINER_RUNNING,CreatedAt:1628813110965706133,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rzxjz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 650bf88e-f784-45f9-8943-257e984acedb,},Annotations:map[string]string{io.kubernetes.container.hash: d48443f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30246750ae65a222bbebc82aec9e04ccd550ff645f3c13528cdd844fc3506ed9,PodSandboxId:ebf76d4c0e9109a98390ea67a365790c31f2b939162be891950db40068893dd2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628813110411451912,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9999a063-d32c-4253-8af3-7c28fdc3c692,},Annotations:map[string]string{io.kubernetes.container.hash: 504fcb50,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef6034d32143b6615e6876df647ca585cddd90a8020c7be214bf70a4392fc14f,PodSandboxId:df2707b50f79d6820754a6b0ce62caf95885b3dc8baa43d012d4da33484856c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628813109426681117,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-sstrb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16f6c77d-26a2-47e7-9c19-74736961cc13,},Annotations:map[string]string{io.kubernetes.container.hash: de0b46f5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:197270e3714f8da106e4432573059d334400188532dad024e8659b6ca68d3950,PodSandboxId:8bea8dc2306da2e31aa4367a6dc4dafa83e1f89129127b9ce0be1002b821ca45,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628813107623828740,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tvtvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b54ef6d-43f6-4dc1-b3be-c5fb
1b57a108,},Annotations:map[string]string{io.kubernetes.container.hash: 2b7f9cc8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:147ad965e8ea44e04be64c13be2c8fc9afa937d704566b412ecfcb2b1e079d95,PodSandboxId:5292cf223a2a94890741821b0cdf167047413c472b5b1094e36fc4e9938c133b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628813085373769587,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-20210813000359-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 113dba97bad3e83d4c789adae2
059392,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15143b6bceb4a8214e28179fabcae7dd97bf707e8479bff0016061130ed21a6b,PodSandboxId:9801a58b23d0586bb52d693945d09168ddbceabbd92383ddac6da7af798ee977,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628813085209894855,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-20210813000359-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc4647bddd439c8d0983a3b358a72513,},Annotations:map[string]string{io.k
ubernetes.container.hash: 2b8db17f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd7113723a04b479e05376effd00d05266a436fc3ad70a9db4d776e661744766,PodSandboxId:4a5904acbb4c46b25b53d6d1e794ab382871dec2032c217ee2bd4aad27dc7e34,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628813085052726011,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-20210813000359-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e7d0bc335d72432dc6bd22d4541dfbd,},
Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe1711400a92c91e4a417968e5aa7d64e9b5f216105c1ce6f378be5ba2438982,PodSandboxId:586605455cab03d2216a4dc4956feebd0ba82f7335f25f9df737fdfa8afd1cbf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628813084927482208,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-20210813000359-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0dcc263218298eb0bc9dd91ad6c2c6d,},Anno
tations:map[string]string{io.kubernetes.container.hash: f554a0df,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f7ac89cf-8be1-4876-b2b8-e27328ddca32 name=/runtime.v1alpha2.RuntimeService/ListContainers
	[... three further ListContainers request/response cycles between 00:06:30.640 and 00:06:30.722 (ids 864ad63e-5e1f-45b5-a2f2-304c2f780efa, 5747e18a-9c09-4de7-bbde-9537acda9bd3, 146e78c3-0e52-4d88-a608-1e4548f014fc) omitted; their response bodies are identical to the one above ...]
	Aug 13 00:06:30 multinode-20210813000359-820289 crio[2072]: time="2021-08-13 00:06:30.760776611Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=000be3af-d873-45bd-960e-0c07a18970d4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 00:06:30 multinode-20210813000359-820289 crio[2072]: time="2021-08-13 00:06:30.760832903Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=000be3af-d873-45bd-960e-0c07a18970d4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 00:06:30 multinode-20210813000359-820289 crio[2072]: time="2021-08-13 00:06:30.761304647Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bb24f7c3733accb9bf83243432a7b1dfe71c6ba332858cb895ee4b90e164c7e,PodSandboxId:8ebe136452482398b02ed3194e73d7f3b47c2ebe0c8636a1a5ea7c3bf727fd52,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,State:CONTAINER_RUNNING,CreatedAt:1628813162681483299,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-84b6686758-gpb9d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b2d74316-9ad3-435c-85a3-19e862cd06d2,},Annotations:map[string]string{io.kubernetes.container.hash: c2687e87,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d93d4cc078f93deae8a572b1cfdba5fb94f5fee55d54e5f8a31839e13954a0a9,PodSandboxId:87950fe11799e2db85a712b095c5a589e895487c070a8985dade001cc54d69d3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1,State:CONTAINER_RUNNING,CreatedAt:1628813110965706133,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rzxjz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 650bf88e-f784-45f9-8943-257e984acedb,},Annotations:map[string]string{io.kubernetes.container.hash: d48443f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30246750ae65a222bbebc82aec9e04ccd550ff645f3c13528cdd844fc3506ed9,PodSandboxId:ebf76d4c0e9109a98390ea67a365790c31f2b939162be891950db40068893dd2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628813110411451912,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9999a063-d32c-4253-8af3-7c28fdc3c692,},Annotations:map[string]string{io.kubernetes.container.hash: 504fcb50,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef6034d32143b6615e6876df647ca585cddd90a8020c7be214bf70a4392fc14f,PodSandboxId:df2707b50f79d6820754a6b0ce62caf95885b3dc8baa43d012d4da33484856c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628813109426681117,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-sstrb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16f6c77d-26a2-47e7-9c19-74736961cc13,},Annotations:map[string]string{io.kubernetes.container.hash: de0b46f5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:197270e3714f8da106e4432573059d334400188532dad024e8659b6ca68d3950,PodSandboxId:8bea8dc2306da2e31aa4367a6dc4dafa83e1f89129127b9ce0be1002b821ca45,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628813107623828740,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tvtvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b54ef6d-43f6-4dc1-b3be-c5fb
1b57a108,},Annotations:map[string]string{io.kubernetes.container.hash: 2b7f9cc8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:147ad965e8ea44e04be64c13be2c8fc9afa937d704566b412ecfcb2b1e079d95,PodSandboxId:5292cf223a2a94890741821b0cdf167047413c472b5b1094e36fc4e9938c133b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628813085373769587,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-20210813000359-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 113dba97bad3e83d4c789adae2
059392,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15143b6bceb4a8214e28179fabcae7dd97bf707e8479bff0016061130ed21a6b,PodSandboxId:9801a58b23d0586bb52d693945d09168ddbceabbd92383ddac6da7af798ee977,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628813085209894855,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-20210813000359-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc4647bddd439c8d0983a3b358a72513,},Annotations:map[string]string{io.k
ubernetes.container.hash: 2b8db17f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd7113723a04b479e05376effd00d05266a436fc3ad70a9db4d776e661744766,PodSandboxId:4a5904acbb4c46b25b53d6d1e794ab382871dec2032c217ee2bd4aad27dc7e34,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628813085052726011,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-20210813000359-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e7d0bc335d72432dc6bd22d4541dfbd,},
Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe1711400a92c91e4a417968e5aa7d64e9b5f216105c1ce6f378be5ba2438982,PodSandboxId:586605455cab03d2216a4dc4956feebd0ba82f7335f25f9df737fdfa8afd1cbf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628813084927482208,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-20210813000359-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0dcc263218298eb0bc9dd91ad6c2c6d,},Anno
tations:map[string]string{io.kubernetes.container.hash: f554a0df,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=000be3af-d873-45bd-960e-0c07a18970d4 name=/runtime.v1alpha2.RuntimeService/ListContainers
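	The entry above shows crio answering one of the kubelet's periodic RuntimeService/ListContainers polls over the v1alpha2 CRI API: an empty filter (hence the "No filters were applied" reply) followed by the full container list. A minimal Go sketch of the same call, given only as an illustration: it assumes the k8s.io/cri-api v1alpha2 client and the /var/run/crio/crio.sock socket path from the node annotations, and is not part of the minikube test code.

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// crio serves the CRI on a unix socket; grpc-go accepts the unix:// scheme.
		conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
			grpc.WithInsecure(), grpc.WithBlock())
		if err != nil {
			log.Fatalf("dial crio: %v", err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)

		// An empty filter mirrors the logged request, so crio returns the full list.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{},
		})
		if err != nil {
			log.Fatalf("ListContainers: %v", err)
		}
		for _, c := range resp.Containers {
			// Truncated id, container name, and state, one container per line.
			fmt.Printf("%s  %s  %s\n", c.Id[:13], c.Metadata.Name, c.State)
		}
	}

	Run on the node itself (e.g. via minikube ssh), this would print the same nine running containers that the status table below reports.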
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                               CREATED              STATE               NAME                      ATTEMPT             POD ID
	2bb24f7c3733a       docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47   28 seconds ago       Running             busybox                   0                   8ebe136452482
	d93d4cc078f93       6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb                                    About a minute ago   Running             kindnet-cni               0                   87950fe11799e
	30246750ae65a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                    About a minute ago   Running             storage-provisioner       0                   ebf76d4c0e910
	ef6034d32143b       296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899                                    About a minute ago   Running             coredns                   0                   df2707b50f79d
	197270e3714f8       adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92                                    About a minute ago   Running             kube-proxy                0                   8bea8dc2306da
	147ad965e8ea4       6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a                                    About a minute ago   Running             kube-scheduler            0                   5292cf223a2a9
	15143b6bceb4a       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934                                    About a minute ago   Running             etcd                      0                   9801a58b23d05
	cd7113723a04b       bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9                                    About a minute ago   Running             kube-controller-manager   0                   4a5904acbb4c4
	fe1711400a92c       3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80                                    About a minute ago   Running             kube-apiserver            0                   586605455cab0
	
	* 
	* ==> coredns [ef6034d32143b6615e6876df647ca585cddd90a8020c7be214bf70a4392fc14f] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = 8f51b271a18f2ce6fcaee5f1cfda3ed0
	[INFO] Reloading complete
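	The two configuration MD5s above mark CoreDNS re-reading its Corefile (served from the kube-system/coredns ConfigMap) after minikube rewrote it; the health plugin's 5s lameduck window and the :9153 metrics port match the coredns container ports recorded in the crio log. For orientation only, a stock kubeadm-style Corefile for CoreDNS 1.8.0 looks roughly like the following; the exact file used by this cluster is not captured in the log:

	.:53 {
	    errors
	    health {
	       lameduck 5s
	    }
	    ready
	    kubernetes cluster.local in-addr.arpa ip6.arpa {
	       pods insecure
	       fallthrough in-addr.arpa ip6.arpa
	    }
	    prometheus :9153
	    forward . /etc/resolv.conf
	    cache 30
	    loop
	    reload
	    loadbalance
	}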
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-20210813000359-820289
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-20210813000359-820289
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=dc1c3ca26e9449ce488a773126b8450402c94a19
	                    minikube.k8s.io/name=multinode-20210813000359-820289
	                    minikube.k8s.io/updated_at=2021_08_13T00_04_54_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 00:04:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-20210813000359-820289
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Aug 2021 00:06:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 00:06:29 +0000   Fri, 13 Aug 2021 00:04:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 00:06:29 +0000   Fri, 13 Aug 2021 00:04:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 00:06:29 +0000   Fri, 13 Aug 2021 00:04:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 00:06:29 +0000   Fri, 13 Aug 2021 00:05:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.22
	  Hostname:    multinode-20210813000359-820289
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186320Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186320Ki
	  pods:               110
	System Info:
	  Machine ID:                 d38d599800bb433e9cf69a669c4ea971
	  System UUID:                d38d5998-00bb-433e-9cf6-9a669c4ea971
	  Boot ID:                    1695864a-b84f-4769-a5d2-70e036721d1a
	  Kernel Version:             4.19.182
	  OS Image:                   Buildroot 2020.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.20.2
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-84b6686758-gpb9d                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 coredns-558bd4d5db-sstrb                                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (7%)     85s
	  kube-system                 etcd-multinode-20210813000359-820289                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         92s
	  kube-system                 kindnet-rzxjz                                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      85s
	  kube-system                 kube-apiserver-multinode-20210813000359-820289              250m (12%)    0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-controller-manager-multinode-20210813000359-820289    200m (10%)    0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-proxy-tvtvh                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-scheduler-multinode-20210813000359-820289              100m (5%)     0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 storage-provisioner                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 92s   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  92s   kubelet     Node multinode-20210813000359-820289 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    92s   kubelet     Node multinode-20210813000359-820289 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     92s   kubelet     Node multinode-20210813000359-820289 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  92s   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                86s   kubelet     Node multinode-20210813000359-820289 status is now: NodeReady
	  Normal  Starting                 83s   kube-proxy  Starting kube-proxy.
	
	
	Name:               multinode-20210813000359-820289-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-20210813000359-820289-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 00:05:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-20210813000359-820289-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Aug 2021 00:06:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 00:06:21 +0000   Fri, 13 Aug 2021 00:05:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 00:06:21 +0000   Fri, 13 Aug 2021 00:05:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 00:06:21 +0000   Fri, 13 Aug 2021 00:05:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 00:06:21 +0000   Fri, 13 Aug 2021 00:05:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.152
	  Hostname:    multinode-20210813000359-820289-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186320Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186320Ki
	  pods:               110
	System Info:
	  Machine ID:                 781605be65944ef8b9544ba2115c5324
	  System UUID:                781605be-6594-4ef8-b954-4ba2115c5324
	  Boot ID:                    28e74cce-06b7-45b4-8a51-dfcbd3cde66e
	  Kernel Version:             4.19.182
	  OS Image:                   Buildroot 2020.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.20.2
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-84b6686758-p6fb8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kindnet-dpckf               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      41s
	  kube-system                 kube-proxy-8h4t8            0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 41s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s (x2 over 41s)  kubelet     Node multinode-20210813000359-820289-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s (x2 over 41s)  kubelet     Node multinode-20210813000359-820289-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s (x2 over 41s)  kubelet     Node multinode-20210813000359-820289-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  41s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                40s                kubelet     Node multinode-20210813000359-820289-m02 status is now: NodeReady
	  Normal  Starting                 34s                kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Aug13 00:04] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.093803] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +3.721935] Unstable clock detected, switching default tracing clock to "global"
	              If you want to keep using the local clock, then add:
	                "trace_clock=local"
	              on the kernel command line
	[  +0.000025] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.172040] systemd-fstab-generator[1160]: Ignoring "noauto" for root device
	[  +0.033747] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +0.931532] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1719 comm=systemd-network
	[  +1.025329] vboxguest: loading out-of-tree module taints kernel.
	[  +0.005870] vboxguest: PCI device not found, probably running on physical hardware.
	[  +1.431390] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[ +14.205447] systemd-fstab-generator[2161]: Ignoring "noauto" for root device
	[  +0.127782] systemd-fstab-generator[2174]: Ignoring "noauto" for root device
	[  +0.189875] systemd-fstab-generator[2202]: Ignoring "noauto" for root device
	[  +8.599155] systemd-fstab-generator[2405]: Ignoring "noauto" for root device
	[ +17.366209] systemd-fstab-generator[2821]: Ignoring "noauto" for root device
	[Aug13 00:05] kauditd_printk_skb: 38 callbacks suppressed
	[Aug13 00:06] NFSD: Unable to end grace period: -110
	
	* 
	* ==> etcd [15143b6bceb4a8214e28179fabcae7dd97bf707e8479bff0016061130ed21a6b] <==
	* 2021-08-13 00:04:50.671113 W | etcdserver: read-only range request "key:\"/registry/minions/multinode-20210813000359-820289\" " with result "range_response_count:1 size:4417" took too long (236.023093ms) to execute
	2021-08-13 00:04:50.671588 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-scheduler-multinode-20210813000359-820289\" " with result "range_response_count:0 size:4" took too long (236.74127ms) to execute
	2021-08-13 00:04:50.671819 W | etcdserver: read-only range request "key:\"/registry/csinodes/multinode-20210813000359-820289\" " with result "range_response_count:0 size:4" took too long (237.052743ms) to execute
	2021-08-13 00:04:59.705476 W | etcdserver: read-only range request "key:\"/registry/minions/multinode-20210813000359-820289\" " with result "range_response_count:1 size:5602" took too long (219.610465ms) to execute
	2021-08-13 00:04:59.707292 W | etcdserver: read-only range request "key:\"/registry/csinodes/multinode-20210813000359-820289\" " with result "range_response_count:1 size:668" took too long (220.7375ms) to execute
	2021-08-13 00:04:59.707570 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (120.536562ms) to execute
	2021-08-13 00:05:01.781354 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:05:08.548115 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:05:18.546875 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:05:28.546344 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:05:38.547071 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:05:42.324408 W | wal: sync duration of 1.259616242s, expected less than 1s
	2021-08-13 00:05:42.357370 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:1 size:6118" took too long (499.452475ms) to execute
	2021-08-13 00:05:42.357549 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (649.015558ms) to execute
	2021-08-13 00:05:43.465450 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:1129" took too long (390.915031ms) to execute
	2021-08-13 00:05:43.465757 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:421" took too long (1.026973983s) to execute
	2021-08-13 00:05:43.465907 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (767.15358ms) to execute
	2021-08-13 00:05:48.547140 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:05:50.863435 W | etcdserver: read-only range request "key:\"/registry/csinodes/multinode-20210813000359-820289-m02\" " with result "range_response_count:0 size:5" took too long (173.058336ms) to execute
	2021-08-13 00:05:50.880773 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (179.92153ms) to execute
	2021-08-13 00:05:55.812376 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (114.544574ms) to execute
	2021-08-13 00:05:58.546455 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:06:08.547405 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:06:18.547549 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:06:28.546990 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  00:06:31 up 2 min,  0 users,  load average: 1.09, 0.54, 0.20
	Linux multinode-20210813000359-820289 4.19.182 #1 SMP Fri Aug 6 09:11:32 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [fe1711400a92c91e4a417968e5aa7d64e9b5f216105c1ce6f378be5ba2438982] <==
	* I0813 00:04:52.165237       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0813 00:04:52.321712       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.39.22]
	I0813 00:04:52.323005       1 controller.go:611] quota admission added evaluator for: endpoints
	I0813 00:04:52.329657       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0813 00:04:52.963820       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0813 00:04:53.987647       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0813 00:04:54.063314       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0813 00:04:59.716831       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0813 00:05:06.372626       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0813 00:05:06.576938       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0813 00:05:19.544423       1 client.go:360] parsed scheme: "passthrough"
	I0813 00:05:19.544503       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 00:05:19.544527       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 00:05:42.358309       1 trace.go:205] Trace[301258205]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (13-Aug-2021 00:05:41.857) (total time: 500ms):
	Trace[301258205]: [500.998587ms] [500.998587ms] END
	I0813 00:05:42.358800       1 trace.go:205] Trace[445959692]: "List" url:/api/v1/nodes,user-agent:kindnetd/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.39.22,accept:application/json, */*,protocol:HTTP/2.0 (13-Aug-2021 00:05:41.857) (total time: 501ms):
	Trace[445959692]: ---"Listing from storage done" 501ms (00:05:00.358)
	Trace[445959692]: [501.596542ms] [501.596542ms] END
	I0813 00:05:43.468671       1 trace.go:205] Trace[1190083982]: "Get" url:/api/v1/namespaces/default/endpoints/kubernetes,user-agent:kube-apiserver/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 00:05:42.438) (total time: 1029ms):
	Trace[1190083982]: ---"About to write a response" 1028ms (00:05:00.466)
	Trace[1190083982]: [1.029915279s] [1.029915279s] END
	I0813 00:06:01.288323       1 client.go:360] parsed scheme: "passthrough"
	I0813 00:06:01.288386       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 00:06:01.288401       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	E0813 00:06:19.762255       1 upgradeaware.go:401] Error proxying data from backend to client: write tcp 192.168.39.22:8443->192.168.39.1:48532: write: connection reset by peer
	
	* 
	* ==> kube-controller-manager [cd7113723a04b479e05376effd00d05266a436fc3ad70a9db4d776e661744766] <==
	* I0813 00:05:06.026258       1 shared_informer.go:247] Caches are synced for crt configmap 
	I0813 00:05:06.102306       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0813 00:05:06.390232       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-tvtvh"
	I0813 00:05:06.399387       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-rzxjz"
	E0813 00:05:06.446097       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"e61f546c-a1b1-4412-af08-7c2ebe78d772", ResourceVersion:"288", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764409894, loc:(*time.Location)(0x72ff440)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc00035abd0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00035ac00)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.
LabelSelector)(0xc0013a2460), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Gl
usterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0013b83c0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00035a
c30), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeS
ource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00035ac60), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil),
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.21.3", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0013a24a0)}}, Resources:v1.R
esourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001601da0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPo
licy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000fe19d8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000379b20), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), Runtime
ClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000f97900)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000fe1a28)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	E0813 00:05:06.453032       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"c1f4f3ef-f446-4a5a-9116-cc17ab8a2d14", ResourceVersion:"305", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764409894, loc:(*time.Location)(0x72ff440)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"k
indnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"kindest/kindnetd:v20210326-1e038dc5\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists
\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc00035ac90), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00035acc0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0013a2520), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"ki
ndnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00035acf0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FC
VolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00035ad20), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolume
Source)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00035ad50), EmptyDir:(*v1.Emp
tyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVo
lume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"kindest/kindnetd:v20210326-1e038dc5", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0013a2540)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0013a2580)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDe
cAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Li
fecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001601e00), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000fe1c48), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000379c00), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.Hos
tAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000f97950)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000fe1c90)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0813 00:05:06.499954       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 00:05:06.543290       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 00:05:06.543386       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0813 00:05:06.580142       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-558bd4d5db to 2"
	I0813 00:05:06.716895       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0813 00:05:06.827982       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-sstrb"
	I0813 00:05:06.859250       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-rgwt6"
	I0813 00:05:06.928919       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-rgwt6"
	W0813 00:05:50.866621       1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-20210813000359-820289-m02" does not exist
	W0813 00:05:50.873151       1 node_lifecycle_controller.go:1013] Missing timestamp for Node multinode-20210813000359-820289-m02. Assuming now as a timestamp.
	I0813 00:05:50.873727       1 event.go:291] "Event occurred" object="multinode-20210813000359-820289-m02" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-20210813000359-820289-m02 event: Registered Node multinode-20210813000359-820289-m02 in Controller"
	I0813 00:05:50.904798       1 range_allocator.go:373] Set node multinode-20210813000359-820289-m02 PodCIDR to [10.244.1.0/24]
	I0813 00:05:50.935579       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-8h4t8"
	I0813 00:05:50.943336       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-dpckf"
	I0813 00:05:59.274555       1 event.go:291] "Event occurred" object="default/busybox" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-84b6686758 to 2"
	I0813 00:05:59.291911       1 event.go:291] "Event occurred" object="default/busybox-84b6686758" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-84b6686758-p6fb8"
	I0813 00:05:59.320696       1 event.go:291] "Event occurred" object="default/busybox-84b6686758" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-84b6686758-gpb9d"
	
	* 
	* ==> kube-proxy [197270e3714f8da106e4432573059d334400188532dad024e8659b6ca68d3950] <==
	* I0813 00:05:08.157823       1 node.go:172] Successfully retrieved node IP: 192.168.39.22
	I0813 00:05:08.157940       1 server_others.go:140] Detected node IP 192.168.39.22
	W0813 00:05:08.158041       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	W0813 00:05:08.238694       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0813 00:05:08.238728       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0813 00:05:08.238744       1 server_others.go:212] Using iptables Proxier.
	I0813 00:05:08.239501       1 server.go:643] Version: v1.21.3
	I0813 00:05:08.241057       1 config.go:315] Starting service config controller
	I0813 00:05:08.241070       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 00:05:08.241104       1 config.go:224] Starting endpoint slice config controller
	I0813 00:05:08.241108       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0813 00:05:08.259799       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 00:05:08.265334       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 00:05:08.341440       1 shared_informer.go:247] Caches are synced for service config 
	I0813 00:05:08.341450       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [147ad965e8ea44e04be64c13be2c8fc9afa937d704566b412ecfcb2b1e079d95] <==
	* I0813 00:04:50.453581       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 00:04:50.453621       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0813 00:04:50.455372       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 00:04:50.458065       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 00:04:50.458587       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 00:04:50.458677       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 00:04:50.460389       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 00:04:50.460476       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 00:04:50.460546       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 00:04:50.460694       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 00:04:50.460745       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 00:04:50.460796       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 00:04:50.460838       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 00:04:50.460886       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 00:04:50.460933       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 00:04:50.460972       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 00:04:51.346695       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 00:04:51.413250       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 00:04:51.524533       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 00:04:51.593056       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 00:04:51.618330       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 00:04:51.619558       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 00:04:51.813783       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 00:04:51.816826       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0813 00:04:54.254249       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 00:04:10 UTC, end at Fri 2021-08-13 00:06:31 UTC. --
	Aug 13 00:05:06 multinode-20210813000359-820289 kubelet[2829]: I0813 00:05:06.439716    2829 topology_manager.go:187] "Topology Admit Handler"
	Aug 13 00:05:06 multinode-20210813000359-820289 kubelet[2829]: I0813 00:05:06.442619    2829 topology_manager.go:187] "Topology Admit Handler"
	Aug 13 00:05:06 multinode-20210813000359-820289 kubelet[2829]: W0813 00:05:06.521418    2829 watcher.go:95] Error while processing event ("/sys/fs/cgroup/devices/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b54ef6d_43f6_4dc1_b3be_c5fb1b57a108.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b54ef6d_43f6_4dc1_b3be_c5fb1b57a108.slice: no such file or directory
	Aug 13 00:05:06 multinode-20210813000359-820289 kubelet[2829]: I0813 00:05:06.551308    2829 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7b54ef6d-43f6-4dc1-b3be-c5fb1b57a108-kube-proxy\") pod \"kube-proxy-tvtvh\" (UID: \"7b54ef6d-43f6-4dc1-b3be-c5fb1b57a108\") "
	Aug 13 00:05:06 multinode-20210813000359-820289 kubelet[2829]: I0813 00:05:06.552323    2829 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7b54ef6d-43f6-4dc1-b3be-c5fb1b57a108-lib-modules\") pod \"kube-proxy-tvtvh\" (UID: \"7b54ef6d-43f6-4dc1-b3be-c5fb1b57a108\") "
	Aug 13 00:05:06 multinode-20210813000359-820289 kubelet[2829]: I0813 00:05:06.552816    2829 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/650bf88e-f784-45f9-8943-257e984acedb-xtables-lock\") pod \"kindnet-rzxjz\" (UID: \"650bf88e-f784-45f9-8943-257e984acedb\") "
	Aug 13 00:05:06 multinode-20210813000359-820289 kubelet[2829]: I0813 00:05:06.553052    2829 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvrmd\" (UniqueName: \"kubernetes.io/projected/7b54ef6d-43f6-4dc1-b3be-c5fb1b57a108-kube-api-access-fvrmd\") pod \"kube-proxy-tvtvh\" (UID: \"7b54ef6d-43f6-4dc1-b3be-c5fb1b57a108\") "
	Aug 13 00:05:06 multinode-20210813000359-820289 kubelet[2829]: I0813 00:05:06.553390    2829 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7b54ef6d-43f6-4dc1-b3be-c5fb1b57a108-xtables-lock\") pod \"kube-proxy-tvtvh\" (UID: \"7b54ef6d-43f6-4dc1-b3be-c5fb1b57a108\") "
	Aug 13 00:05:06 multinode-20210813000359-820289 kubelet[2829]: I0813 00:05:06.553607    2829 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/650bf88e-f784-45f9-8943-257e984acedb-cni-cfg\") pod \"kindnet-rzxjz\" (UID: \"650bf88e-f784-45f9-8943-257e984acedb\") "
	Aug 13 00:05:06 multinode-20210813000359-820289 kubelet[2829]: I0813 00:05:06.553826    2829 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/650bf88e-f784-45f9-8943-257e984acedb-lib-modules\") pod \"kindnet-rzxjz\" (UID: \"650bf88e-f784-45f9-8943-257e984acedb\") "
	Aug 13 00:05:06 multinode-20210813000359-820289 kubelet[2829]: I0813 00:05:06.554041    2829 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4677\" (UniqueName: \"kubernetes.io/projected/650bf88e-f784-45f9-8943-257e984acedb-kube-api-access-v4677\") pod \"kindnet-rzxjz\" (UID: \"650bf88e-f784-45f9-8943-257e984acedb\") "
	Aug 13 00:05:06 multinode-20210813000359-820289 kubelet[2829]: E0813 00:05:06.669835    2829 projected.go:293] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Aug 13 00:05:06 multinode-20210813000359-820289 kubelet[2829]: E0813 00:05:06.670054    2829 projected.go:199] Error preparing data for projected volume kube-api-access-v4677 for pod kube-system/kindnet-rzxjz: configmap "kube-root-ca.crt" not found
	Aug 13 00:05:06 multinode-20210813000359-820289 kubelet[2829]: E0813 00:05:06.670274    2829 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/projected/650bf88e-f784-45f9-8943-257e984acedb-kube-api-access-v4677 podName:650bf88e-f784-45f9-8943-257e984acedb nodeName:}" failed. No retries permitted until 2021-08-13 00:05:07.170247709 +0000 UTC m=+13.251773692 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"kube-api-access-v4677\" (UniqueName: \"kubernetes.io/projected/650bf88e-f784-45f9-8943-257e984acedb-kube-api-access-v4677\") pod \"kindnet-rzxjz\" (UID: \"650bf88e-f784-45f9-8943-257e984acedb\") : configmap \"kube-root-ca.crt\" not found"
	Aug 13 00:05:06 multinode-20210813000359-820289 kubelet[2829]: I0813 00:05:06.854604    2829 topology_manager.go:187] "Topology Admit Handler"
	Aug 13 00:05:06 multinode-20210813000359-820289 kubelet[2829]: I0813 00:05:06.863466    2829 topology_manager.go:187] "Topology Admit Handler"
	Aug 13 00:05:06 multinode-20210813000359-820289 kubelet[2829]: I0813 00:05:06.957159    2829 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16f6c77d-26a2-47e7-9c19-74736961cc13-config-volume\") pod \"coredns-558bd4d5db-sstrb\" (UID: \"16f6c77d-26a2-47e7-9c19-74736961cc13\") "
	Aug 13 00:05:06 multinode-20210813000359-820289 kubelet[2829]: I0813 00:05:06.957533    2829 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfx7l\" (UniqueName: \"kubernetes.io/projected/16f6c77d-26a2-47e7-9c19-74736961cc13-kube-api-access-vfx7l\") pod \"coredns-558bd4d5db-sstrb\" (UID: \"16f6c77d-26a2-47e7-9c19-74736961cc13\") "
	Aug 13 00:05:09 multinode-20210813000359-820289 kubelet[2829]: I0813 00:05:09.130357    2829 topology_manager.go:187] "Topology Admit Handler"
	Aug 13 00:05:09 multinode-20210813000359-820289 kubelet[2829]: I0813 00:05:09.208141    2829 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqw9b\" (UniqueName: \"kubernetes.io/projected/9999a063-d32c-4253-8af3-7c28fdc3c692-kube-api-access-wqw9b\") pod \"storage-provisioner\" (UID: \"9999a063-d32c-4253-8af3-7c28fdc3c692\") "
	Aug 13 00:05:09 multinode-20210813000359-820289 kubelet[2829]: I0813 00:05:09.209065    2829 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9999a063-d32c-4253-8af3-7c28fdc3c692-tmp\") pod \"storage-provisioner\" (UID: \"9999a063-d32c-4253-8af3-7c28fdc3c692\") "
	Aug 13 00:05:10 multinode-20210813000359-820289 kubelet[2829]: E0813 00:05:10.052730    2829 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/650bf88e-f784-45f9-8943-257e984acedb/etc-hosts with error exit status 1" pod="kube-system/kindnet-rzxjz"
	Aug 13 00:05:59 multinode-20210813000359-820289 kubelet[2829]: I0813 00:05:59.334816    2829 topology_manager.go:187] "Topology Admit Handler"
	Aug 13 00:05:59 multinode-20210813000359-820289 kubelet[2829]: I0813 00:05:59.421506    2829 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd2sm\" (UniqueName: \"kubernetes.io/projected/b2d74316-9ad3-435c-85a3-19e862cd06d2-kube-api-access-bd2sm\") pod \"busybox-84b6686758-gpb9d\" (UID: \"b2d74316-9ad3-435c-85a3-19e862cd06d2\") "
	Aug 13 00:06:00 multinode-20210813000359-820289 kubelet[2829]: E0813 00:06:00.691463    2829 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/b2d74316-9ad3-435c-85a3-19e862cd06d2/etc-hosts with error exit status 1" pod="default/busybox-84b6686758-gpb9d"
	
	* 
	* ==> storage-provisioner [30246750ae65a222bbebc82aec9e04ccd550ff645f3c13528cdd844fc3506ed9] <==
	* I0813 00:05:10.721489       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 00:05:10.747816       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 00:05:10.748608       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 00:05:10.768795       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 00:05:10.771247       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-20210813000359-820289_2ba37811-4ee3-4290-accc-c4e910b69983!
	I0813 00:05:10.780915       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fbaba920-a52e-4a33-827b-b81a17ff6434", APIVersion:"v1", ResourceVersion:"488", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-20210813000359-820289_2ba37811-4ee3-4290-accc-c4e910b69983 became leader
	I0813 00:05:10.872930       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-20210813000359-820289_2ba37811-4ee3-4290-accc-c4e910b69983!
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-20210813000359-820289 -n multinode-20210813000359-820289
helpers_test.go:262: (dbg) Run:  kubectl --context multinode-20210813000359-820289 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: 
helpers_test.go:273: ======> post-mortem[TestMultiNode/serial/DeployApp2Nodes]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context multinode-20210813000359-820289 describe pod 
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context multinode-20210813000359-820289 describe pod : exit status 1 (45.808726ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context multinode-20210813000359-820289 describe pod : exit status 1
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (33.63s)
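
Note on the post-mortem above: the "error: resource name may not be empty" exit is an artifact of the helper, not a second failure. The field-selector query returned no non-running pods, and the helper still invoked kubectl describe pod with an empty name list. A minimal guard, as a sketch only (the function, runner signature, and names below are illustrative, not the actual helpers_test.go code):

	package helpers

	// describeNonRunningPods sketches a guarded post-mortem step. The run
	// callback and profile argument stand in for the harness's own helpers.
	func describeNonRunningPods(run func(args ...string) error, profile string, podNames []string) error {
		if len(podNames) == 0 {
			// No non-running pods: skip the describe, otherwise kubectl
			// exits 1 with "error: resource name may not be empty".
			return nil
		}
		args := append([]string{"--context", profile, "describe", "pod"}, podNames...)
		return run(args...)
	}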

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (13.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813000359-820289 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:529: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813000359-820289 -- exec busybox-84b6686758-gpb9d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:537: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813000359-820289 -- exec busybox-84b6686758-gpb9d -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-20210813000359-820289 -- exec busybox-84b6686758-gpb9d -- sh -c "ping -c 1 192.168.39.1": exit status 1 (238.508775ms)

-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

** /stderr **
multinode_test.go:538: Failed to ping host (192.168.39.1) from pod (busybox-84b6686758-gpb9d): exit status 1
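
This first failure is a privilege problem rather than a connectivity one: busybox's ping opens a raw ICMP socket, which needs CAP_NET_RAW, and the non-root busybox container evidently is not granted it under this CRI-O runtime, hence "permission denied (are you root?)". A hedged workaround sketch, assuming the busybox deployment created earlier in this serial run is still present (this is not the test's own remedy):

	# Hypothetical workaround: grant CAP_NET_RAW to the busybox container so
	# its raw-socket ping can run unprivileged, then wait for the new pods.
	kubectl --context multinode-20210813000359-820289 patch deployment busybox \
	  --type=json \
	  -p='[{"op":"add","path":"/spec/template/spec/containers/0/securityContext","value":{"capabilities":{"add":["NET_RAW"]}}}]'
	kubectl --context multinode-20210813000359-820289 rollout status deployment/busybox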
multinode_test.go:529: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813000359-820289 -- exec busybox-84b6686758-p6fb8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
E0813 00:06:41.535130  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812235933-820289/client.crt: no such file or directory
E0813 00:06:41.540396  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812235933-820289/client.crt: no such file or directory
E0813 00:06:41.550624  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812235933-820289/client.crt: no such file or directory
E0813 00:06:41.570861  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812235933-820289/client.crt: no such file or directory
E0813 00:06:41.611078  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812235933-820289/client.crt: no such file or directory
E0813 00:06:41.691398  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812235933-820289/client.crt: no such file or directory
E0813 00:06:41.851787  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812235933-820289/client.crt: no such file or directory
E0813 00:06:42.172336  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812235933-820289/client.crt: no such file or directory
E0813 00:06:42.813247  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812235933-820289/client.crt: no such file or directory
multinode_test.go:529: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-20210813000359-820289 -- exec busybox-84b6686758-p6fb8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": (10.245674293s)
multinode_test.go:537: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813000359-820289 -- exec busybox-84b6686758-p6fb8 -- sh -c "ping -c 1 <nil>"
multinode_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-20210813000359-820289 -- exec busybox-84b6686758-p6fb8 -- sh -c "ping -c 1 <nil>": exit status 2 (237.03483ms)

** stderr ** 
	sh: syntax error: unexpected end of file
	command terminated with exit code 2

** /stderr **
multinode_test.go:538: Failed to ping host (<nil>) from pod (busybox-84b6686758-p6fb8): exit status 2
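
The second failure is downstream of the DNS step rather than of ping itself: the awk 'NR==5' | cut -d' ' -f3 pipeline depends on the resolver printing its answer on exactly the fifth line, it produced no usable IP here, and the test then formatted a nil value into the ping command, which the pod's shell rejected as a syntax error. A less position-dependent probe might look like the sketch below (hypothetical; the field position still assumes busybox's "Address 1: <ip> <name>" answer format):

	# Hypothetical probe: select answer lines by their Address prefix and keep
	# the last match, instead of slicing a fixed line number.
	kubectl --context multinode-20210813000359-820289 exec busybox-84b6686758-p6fb8 -- \
	  sh -c "nslookup host.minikube.internal | awk '/^Address/ {ip=\$3} END {print ip}'"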
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-20210813000359-820289 -n multinode-20210813000359-820289
helpers_test.go:245: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813000359-820289 logs -n 25
E0813 00:06:44.093915  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812235933-820289/client.crt: no such file or directory
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210813000359-820289 logs -n 25: (1.240919411s)
helpers_test.go:253: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-----------------------------------------|----------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                 Profile                 |   User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|-----------------------------------------|----------|---------|-------------------------------|-------------------------------|
	| -p      | functional-20210812235933-820289                  | functional-20210812235933-820289        | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:02:05 UTC | Fri, 13 Aug 2021 00:02:05 UTC |
	|         | ssh sudo cat                                      |                                         |          |         |                               |                               |
	|         | /etc/ssl/certs/3ec20f2e.0                         |                                         |          |         |                               |                               |
	| -p      | functional-20210812235933-820289                  | functional-20210812235933-820289        | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:02:05 UTC | Fri, 13 Aug 2021 00:02:05 UTC |
	|         | version --short                                   |                                         |          |         |                               |                               |
	| -p      | functional-20210812235933-820289                  | functional-20210812235933-820289        | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:02:05 UTC | Fri, 13 Aug 2021 00:02:06 UTC |
	|         | version -o=json --components                      |                                         |          |         |                               |                               |
	| -p      | functional-20210812235933-820289                  | functional-20210812235933-820289        | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:02:06 UTC | Fri, 13 Aug 2021 00:02:06 UTC |
	|         | update-context --alsologtostderr                  |                                         |          |         |                               |                               |
	|         | -v=2                                              |                                         |          |         |                               |                               |
	| -p      | functional-20210812235933-820289                  | functional-20210812235933-820289        | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:02:06 UTC | Fri, 13 Aug 2021 00:02:06 UTC |
	|         | update-context --alsologtostderr                  |                                         |          |         |                               |                               |
	|         | -v=2                                              |                                         |          |         |                               |                               |
	| -p      | functional-20210812235933-820289                  | functional-20210812235933-820289        | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:02:06 UTC | Fri, 13 Aug 2021 00:02:06 UTC |
	|         | update-context --alsologtostderr                  |                                         |          |         |                               |                               |
	|         | -v=2                                              |                                         |          |         |                               |                               |
	| delete  | -p                                                | functional-20210812235933-820289        | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:02:41 UTC | Fri, 13 Aug 2021 00:02:42 UTC |
	|         | functional-20210812235933-820289                  |                                         |          |         |                               |                               |
	| start   | -p                                                | json-output-20210813000242-820289       | testUser | v1.22.0 | Fri, 13 Aug 2021 00:02:42 UTC | Fri, 13 Aug 2021 00:03:48 UTC |
	|         | json-output-20210813000242-820289                 |                                         |          |         |                               |                               |
	|         | --output=json --user=testUser                     |                                         |          |         |                               |                               |
	|         | --memory=2200 --wait=true                         |                                         |          |         |                               |                               |
	|         | --driver=kvm2                                     |                                         |          |         |                               |                               |
	|         | --container-runtime=crio                          |                                         |          |         |                               |                               |
	| pause   | -p                                                | json-output-20210813000242-820289       | testUser | v1.22.0 | Fri, 13 Aug 2021 00:03:48 UTC | Fri, 13 Aug 2021 00:03:49 UTC |
	|         | json-output-20210813000242-820289                 |                                         |          |         |                               |                               |
	|         | --output=json --user=testUser                     |                                         |          |         |                               |                               |
	| unpause | -p                                                | json-output-20210813000242-820289       | testUser | v1.22.0 | Fri, 13 Aug 2021 00:03:49 UTC | Fri, 13 Aug 2021 00:03:50 UTC |
	|         | json-output-20210813000242-820289                 |                                         |          |         |                               |                               |
	|         | --output=json --user=testUser                     |                                         |          |         |                               |                               |
	| stop    | -p                                                | json-output-20210813000242-820289       | testUser | v1.22.0 | Fri, 13 Aug 2021 00:03:50 UTC | Fri, 13 Aug 2021 00:03:58 UTC |
	|         | json-output-20210813000242-820289                 |                                         |          |         |                               |                               |
	|         | --output=json --user=testUser                     |                                         |          |         |                               |                               |
	| delete  | -p                                                | json-output-20210813000242-820289       | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:03:58 UTC | Fri, 13 Aug 2021 00:03:59 UTC |
	|         | json-output-20210813000242-820289                 |                                         |          |         |                               |                               |
	| delete  | -p                                                | json-output-error-20210813000359-820289 | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:03:59 UTC | Fri, 13 Aug 2021 00:03:59 UTC |
	|         | json-output-error-20210813000359-820289           |                                         |          |         |                               |                               |
	| start   | -p                                                | multinode-20210813000359-820289         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:03:59 UTC | Fri, 13 Aug 2021 00:05:58 UTC |
	|         | multinode-20210813000359-820289                   |                                         |          |         |                               |                               |
	|         | --wait=true --memory=2200                         |                                         |          |         |                               |                               |
	|         | --nodes=2 -v=8                                    |                                         |          |         |                               |                               |
	|         | --alsologtostderr --driver=kvm2                   |                                         |          |         |                               |                               |
	|         |  --container-runtime=crio                         |                                         |          |         |                               |                               |
	| kubectl | -p multinode-20210813000359-820289 -- apply -f    | multinode-20210813000359-820289         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:05:58 UTC | Fri, 13 Aug 2021 00:05:59 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                                         |          |         |                               |                               |
	| kubectl | -p                                                | multinode-20210813000359-820289         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:05:59 UTC | Fri, 13 Aug 2021 00:06:03 UTC |
	|         | multinode-20210813000359-820289                   |                                         |          |         |                               |                               |
	|         | -- rollout status                                 |                                         |          |         |                               |                               |
	|         | deployment/busybox                                |                                         |          |         |                               |                               |
	| kubectl | -p multinode-20210813000359-820289                | multinode-20210813000359-820289         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:06:03 UTC | Fri, 13 Aug 2021 00:06:03 UTC |
	|         | -- get pods -o                                    |                                         |          |         |                               |                               |
	|         | jsonpath='{.items[*].status.podIP}'               |                                         |          |         |                               |                               |
	| kubectl | -p multinode-20210813000359-820289                | multinode-20210813000359-820289         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:06:03 UTC | Fri, 13 Aug 2021 00:06:03 UTC |
	|         | -- get pods -o                                    |                                         |          |         |                               |                               |
	|         | jsonpath='{.items[*].metadata.name}'              |                                         |          |         |                               |                               |
	| kubectl | -p                                                | multinode-20210813000359-820289         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:06:03 UTC | Fri, 13 Aug 2021 00:06:03 UTC |
	|         | multinode-20210813000359-820289                   |                                         |          |         |                               |                               |
	|         | -- exec                                           |                                         |          |         |                               |                               |
	|         | busybox-84b6686758-gpb9d --                       |                                         |          |         |                               |                               |
	|         | nslookup kubernetes.io                            |                                         |          |         |                               |                               |
	| kubectl | -p                                                | multinode-20210813000359-820289         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:06:09 UTC | Fri, 13 Aug 2021 00:06:09 UTC |
	|         | multinode-20210813000359-820289                   |                                         |          |         |                               |                               |
	|         | -- exec                                           |                                         |          |         |                               |                               |
	|         | busybox-84b6686758-gpb9d --                       |                                         |          |         |                               |                               |
	|         | nslookup kubernetes.default                       |                                         |          |         |                               |                               |
	| kubectl | -p multinode-20210813000359-820289                | multinode-20210813000359-820289         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:06:19 UTC | Fri, 13 Aug 2021 00:06:19 UTC |
	|         | -- exec busybox-84b6686758-gpb9d                  |                                         |          |         |                               |                               |
	|         | -- nslookup                                       |                                         |          |         |                               |                               |
	|         | kubernetes.default.svc.cluster.local              |                                         |          |         |                               |                               |
	| -p      | multinode-20210813000359-820289                   | multinode-20210813000359-820289         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:06:30 UTC | Fri, 13 Aug 2021 00:06:31 UTC |
	|         | logs -n 25                                        |                                         |          |         |                               |                               |
	| kubectl | -p multinode-20210813000359-820289                | multinode-20210813000359-820289         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:06:32 UTC | Fri, 13 Aug 2021 00:06:32 UTC |
	|         | -- get pods -o                                    |                                         |          |         |                               |                               |
	|         | jsonpath='{.items[*].metadata.name}'              |                                         |          |         |                               |                               |
	| kubectl | -p                                                | multinode-20210813000359-820289         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:06:32 UTC | Fri, 13 Aug 2021 00:06:32 UTC |
	|         | multinode-20210813000359-820289                   |                                         |          |         |                               |                               |
	|         | -- exec                                           |                                         |          |         |                               |                               |
	|         | busybox-84b6686758-gpb9d                          |                                         |          |         |                               |                               |
	|         | -- sh -c nslookup                                 |                                         |          |         |                               |                               |
	|         | host.minikube.internal | awk                      |                                         |          |         |                               |                               |
	|         | 'NR==5' | cut -d' ' -f3                           |                                         |          |         |                               |                               |
	| kubectl | -p                                                | multinode-20210813000359-820289         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:06:32 UTC | Fri, 13 Aug 2021 00:06:43 UTC |
	|         | multinode-20210813000359-820289                   |                                         |          |         |                               |                               |
	|         | -- exec                                           |                                         |          |         |                               |                               |
	|         | busybox-84b6686758-p6fb8                          |                                         |          |         |                               |                               |
	|         | -- sh -c nslookup                                 |                                         |          |         |                               |                               |
	|         | host.minikube.internal | awk                      |                                         |          |         |                               |                               |
	|         | 'NR==5' | cut -d' ' -f3                           |                                         |          |         |                               |                               |
	|---------|---------------------------------------------------|-----------------------------------------|----------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 00:03:59
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 00:03:59.496081  826514 out.go:298] Setting OutFile to fd 1 ...
	I0813 00:03:59.496175  826514 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 00:03:59.496185  826514 out.go:311] Setting ErrFile to fd 2...
	I0813 00:03:59.496188  826514 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 00:03:59.496301  826514 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	I0813 00:03:59.496588  826514 out.go:305] Setting JSON to false
	I0813 00:03:59.532184  826514 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":13602,"bootTime":1628799437,"procs":156,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 00:03:59.532432  826514 start.go:121] virtualization: kvm guest
	I0813 00:03:59.535746  826514 out.go:177] * [multinode-20210813000359-820289] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 00:03:59.537206  826514 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0813 00:03:59.535880  826514 notify.go:169] Checking for updates...
	I0813 00:03:59.538704  826514 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 00:03:59.540116  826514 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	I0813 00:03:59.541521  826514 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 00:03:59.541725  826514 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 00:03:59.570289  826514 out.go:177] * Using the kvm2 driver based on user configuration
	I0813 00:03:59.570316  826514 start.go:278] selected driver: kvm2
	I0813 00:03:59.570322  826514 start.go:751] validating driver "kvm2" against <nil>
	I0813 00:03:59.570343  826514 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0813 00:03:59.571390  826514 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:03:59.571592  826514 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0813 00:03:59.581901  826514 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0813 00:03:59.581952  826514 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0813 00:03:59.582101  826514 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 00:03:59.582125  826514 cni.go:93] Creating CNI manager for ""
	I0813 00:03:59.582132  826514 cni.go:154] 0 nodes found, recommending kindnet
	I0813 00:03:59.582137  826514 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0813 00:03:59.582146  826514 start_flags.go:277] config:
	{Name:multinode-20210813000359-820289 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-20210813000359-820289 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0}
	I0813 00:03:59.582286  826514 iso.go:123] acquiring lock: {Name:mk52748db467e5aa4b344902ee09c9ea40467a67 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:03:59.584151  826514 out.go:177] * Starting control plane node multinode-20210813000359-820289 in cluster multinode-20210813000359-820289
	I0813 00:03:59.584171  826514 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 00:03:59.584203  826514 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0813 00:03:59.584230  826514 cache.go:56] Caching tarball of preloaded images
	I0813 00:03:59.584342  826514 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0813 00:03:59.584363  826514 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on crio
	I0813 00:03:59.584683  826514 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/config.json ...
	I0813 00:03:59.584713  826514 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/config.json: {Name:mkec2eec7a60e18f2663b8e1f9d5d73c466c9366 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:03:59.584872  826514 cache.go:205] Successfully downloaded all kic artifacts
	I0813 00:03:59.584900  826514 start.go:313] acquiring machines lock for multinode-20210813000359-820289: {Name:mk2d46e46728943fc604570595bb7616469b4e8e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 00:03:59.584973  826514 start.go:317] acquired machines lock for "multinode-20210813000359-820289" in 33.085µs
	I0813 00:03:59.584999  826514 start.go:89] Provisioning new machine with config: &{Name:multinode-20210813000359-820289 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12122/minikube-v1.22.0-1628238775-12122.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-20210813000359-820289 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 00:03:59.585077  826514 start.go:126] createHost starting for "" (driver="kvm2")
	I0813 00:03:59.587049  826514 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0813 00:03:59.587539  826514 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 00:03:59.587579  826514 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 00:03:59.598019  826514 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41123
	I0813 00:03:59.598493  826514 main.go:130] libmachine: () Calling .GetVersion
	I0813 00:03:59.598999  826514 main.go:130] libmachine: Using API Version  1
	I0813 00:03:59.599019  826514 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 00:03:59.599410  826514 main.go:130] libmachine: () Calling .GetMachineName
	I0813 00:03:59.599599  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetMachineName
	I0813 00:03:59.599774  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .DriverName
	I0813 00:03:59.599934  826514 start.go:160] libmachine.API.Create for "multinode-20210813000359-820289" (driver="kvm2")
	I0813 00:03:59.599963  826514 client.go:168] LocalClient.Create starting
	I0813 00:03:59.599995  826514 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem
	I0813 00:03:59.600055  826514 main.go:130] libmachine: Decoding PEM data...
	I0813 00:03:59.600072  826514 main.go:130] libmachine: Parsing certificate...
	I0813 00:03:59.600159  826514 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem
	I0813 00:03:59.600176  826514 main.go:130] libmachine: Decoding PEM data...
	I0813 00:03:59.600186  826514 main.go:130] libmachine: Parsing certificate...
	I0813 00:03:59.600225  826514 main.go:130] libmachine: Running pre-create checks...
	I0813 00:03:59.600234  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .PreCreateCheck
	I0813 00:03:59.600568  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetConfigRaw
	I0813 00:03:59.600985  826514 main.go:130] libmachine: Creating machine...
	I0813 00:03:59.601001  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .Create
	I0813 00:03:59.601148  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Creating KVM machine...
	I0813 00:03:59.603559  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | found existing default KVM network
	I0813 00:03:59.604569  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | I0813 00:03:59.604432  826537 network.go:288] reserving subnet 192.168.39.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.39.0:0xc0000105e0] misses:0}
	I0813 00:03:59.604614  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | I0813 00:03:59.604526  826537 network.go:235] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0813 00:03:59.626089  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | trying to create private KVM network mk-multinode-20210813000359-820289 192.168.39.0/24...
	I0813 00:03:59.848176  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | private KVM network mk-multinode-20210813000359-820289 192.168.39.0/24 created
	I0813 00:03:59.848215  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | I0813 00:03:59.848119  826537 common.go:108] Making disk image using store path: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	I0813 00:03:59.848236  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Setting up store path in /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289 ...
	I0813 00:03:59.848286  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Building disk image from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/iso/minikube-v1.22.0-1628238775-12122.iso
	I0813 00:03:59.848312  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Downloading /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/iso/minikube-v1.22.0-1628238775-12122.iso...
	I0813 00:04:00.043272  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | I0813 00:04:00.043093  826537 common.go:115] Creating ssh key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289/id_rsa...
	I0813 00:04:00.279344  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | I0813 00:04:00.279234  826537 common.go:121] Creating raw disk image: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289/multinode-20210813000359-820289.rawdisk...
	I0813 00:04:00.279381  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | Writing magic tar header
	I0813 00:04:00.279401  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | Writing SSH key tar header
	I0813 00:04:00.279417  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | I0813 00:04:00.279353  826537 common.go:135] Fixing permissions on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289 ...
	I0813 00:04:00.279532  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289
	I0813 00:04:00.279565  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289 (perms=drwx------)
	I0813 00:04:00.279580  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines
	I0813 00:04:00.279598  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	I0813 00:04:00.279611  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b
	I0813 00:04:00.279625  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0813 00:04:00.279662  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines (perms=drwxr-xr-x)
	I0813 00:04:00.279678  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | Checking permissions on dir: /home/jenkins
	I0813 00:04:00.279696  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | Checking permissions on dir: /home
	I0813 00:04:00.279740  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | Skipping /home - not owner
	I0813 00:04:00.279760  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube (perms=drwxr-xr-x)
	I0813 00:04:00.279794  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b (perms=drwxr-xr-x)
	I0813 00:04:00.279812  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxr-xr-x)
	I0813 00:04:00.279829  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0813 00:04:00.279846  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Creating domain...
	I0813 00:04:00.305341  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:96:6f:d8 in network default
	I0813 00:04:00.305868  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:00.305887  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Ensuring networks are active...
	I0813 00:04:00.307990  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Ensuring network default is active
	I0813 00:04:00.308362  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Ensuring network mk-multinode-20210813000359-820289 is active
	I0813 00:04:00.308912  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Getting domain xml...
	I0813 00:04:00.310651  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Creating domain...
	I0813 00:04:00.667613  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Waiting to get IP...
	I0813 00:04:00.668353  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:00.668812  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | unable to find current IP address of domain multinode-20210813000359-820289 in network mk-multinode-20210813000359-820289
	I0813 00:04:00.668884  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | I0813 00:04:00.668781  826537 retry.go:31] will retry after 263.082536ms: waiting for machine to come up
	I0813 00:04:00.933252  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:00.933785  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | unable to find current IP address of domain multinode-20210813000359-820289 in network mk-multinode-20210813000359-820289
	I0813 00:04:00.933817  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | I0813 00:04:00.933734  826537 retry.go:31] will retry after 381.329545ms: waiting for machine to come up
	I0813 00:04:01.316181  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:01.316693  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | unable to find current IP address of domain multinode-20210813000359-820289 in network mk-multinode-20210813000359-820289
	I0813 00:04:01.316718  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | I0813 00:04:01.316647  826537 retry.go:31] will retry after 422.765636ms: waiting for machine to come up
	I0813 00:04:01.741434  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:01.741860  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | unable to find current IP address of domain multinode-20210813000359-820289 in network mk-multinode-20210813000359-820289
	I0813 00:04:01.741887  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | I0813 00:04:01.741836  826537 retry.go:31] will retry after 473.074753ms: waiting for machine to come up
	I0813 00:04:02.216381  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:02.216916  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | unable to find current IP address of domain multinode-20210813000359-820289 in network mk-multinode-20210813000359-820289
	I0813 00:04:02.216956  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | I0813 00:04:02.216864  826537 retry.go:31] will retry after 587.352751ms: waiting for machine to come up
	I0813 00:04:02.805656  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:02.806132  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | unable to find current IP address of domain multinode-20210813000359-820289 in network mk-multinode-20210813000359-820289
	I0813 00:04:02.806160  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | I0813 00:04:02.806086  826537 retry.go:31] will retry after 834.206799ms: waiting for machine to come up
	I0813 00:04:03.642024  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:03.642483  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | unable to find current IP address of domain multinode-20210813000359-820289 in network mk-multinode-20210813000359-820289
	I0813 00:04:03.642508  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | I0813 00:04:03.642436  826537 retry.go:31] will retry after 746.553905ms: waiting for machine to come up
	I0813 00:04:04.390259  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:04.390717  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | unable to find current IP address of domain multinode-20210813000359-820289 in network mk-multinode-20210813000359-820289
	I0813 00:04:04.390743  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | I0813 00:04:04.390667  826537 retry.go:31] will retry after 987.362415ms: waiting for machine to come up
	I0813 00:04:05.379227  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:05.379784  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | unable to find current IP address of domain multinode-20210813000359-820289 in network mk-multinode-20210813000359-820289
	I0813 00:04:05.379812  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | I0813 00:04:05.379696  826537 retry.go:31] will retry after 1.189835008s: waiting for machine to come up
	I0813 00:04:06.570567  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:06.570986  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | unable to find current IP address of domain multinode-20210813000359-820289 in network mk-multinode-20210813000359-820289
	I0813 00:04:06.571022  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | I0813 00:04:06.570935  826537 retry.go:31] will retry after 1.677229867s: waiting for machine to come up
	I0813 00:04:08.250638  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:08.251111  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | unable to find current IP address of domain multinode-20210813000359-820289 in network mk-multinode-20210813000359-820289
	I0813 00:04:08.251136  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | I0813 00:04:08.251062  826537 retry.go:31] will retry after 2.346016261s: waiting for machine to come up
	I0813 00:04:10.598966  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:10.599428  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | unable to find current IP address of domain multinode-20210813000359-820289 in network mk-multinode-20210813000359-820289
	I0813 00:04:10.599459  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | I0813 00:04:10.599387  826537 retry.go:31] will retry after 3.36678925s: waiting for machine to come up
	I0813 00:04:13.967189  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:13.967680  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has current primary IP address 192.168.39.22 and MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:13.967732  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Found IP for machine: 192.168.39.22
	I0813 00:04:13.967752  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Reserving static IP address...
	I0813 00:04:13.968111  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | unable to find host DHCP lease matching {name: "multinode-20210813000359-820289", mac: "52:54:00:b5:e4:55", ip: "192.168.39.22"} in network mk-multinode-20210813000359-820289
	I0813 00:04:14.016184  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | Getting to WaitForSSH function...
	I0813 00:04:14.016216  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Reserved static IP address: 192.168.39.22
	I0813 00:04:14.016232  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Waiting for SSH to be available...
	I0813 00:04:14.021092  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:14.021436  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e4:55", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:04:13 +0000 UTC Type:0 Mac:52:54:00:b5:e4:55 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b5:e4:55}
	I0813 00:04:14.021461  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined IP address 192.168.39.22 and MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:14.021579  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | Using SSH client type: external
	I0813 00:04:14.021611  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289/id_rsa (-rw-------)
	I0813 00:04:14.021659  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.22 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 00:04:14.021677  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | About to run SSH command:
	I0813 00:04:14.021706  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | exit 0
	I0813 00:04:14.151163  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | SSH cmd err, output: <nil>: 
	I0813 00:04:14.151557  826514 main.go:130] libmachine: (multinode-20210813000359-820289) KVM machine creation complete!
	I0813 00:04:14.151637  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetConfigRaw
	I0813 00:04:14.152186  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .DriverName
	I0813 00:04:14.152397  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .DriverName
	I0813 00:04:14.152630  826514 main.go:130] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0813 00:04:14.152647  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetState
	I0813 00:04:14.155202  826514 main.go:130] libmachine: Detecting operating system of created instance...
	I0813 00:04:14.155218  826514 main.go:130] libmachine: Waiting for SSH to be available...
	I0813 00:04:14.155225  826514 main.go:130] libmachine: Getting to WaitForSSH function...
	I0813 00:04:14.155231  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHHostname
	I0813 00:04:14.159768  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:14.160079  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e4:55", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:04:13 +0000 UTC Type:0 Mac:52:54:00:b5:e4:55 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:multinode-20210813000359-820289 Clientid:01:52:54:00:b5:e4:55}
	I0813 00:04:14.160112  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined IP address 192.168.39.22 and MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:14.160215  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHPort
	I0813 00:04:14.160394  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHKeyPath
	I0813 00:04:14.160525  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHKeyPath
	I0813 00:04:14.160635  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHUsername
	I0813 00:04:14.160826  826514 main.go:130] libmachine: Using SSH client type: native
	I0813 00:04:14.161035  826514 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0813 00:04:14.161052  826514 main.go:130] libmachine: About to run SSH command:
	exit 0
	I0813 00:04:14.270761  826514 main.go:130] libmachine: SSH cmd err, output: <nil>: 
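
The step above confirms SSH availability by repeatedly running the no-op command "exit 0" against the new VM until it succeeds. A minimal sketch of such a probe in Go, assuming the golang.org/x/crypto/ssh package and illustrative address, user, and key-path values (this is not minikube's actual implementation):

    package main

    import (
        "errors"
        "log"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // waitForSSH dials the node and runs "exit 0" until it succeeds,
    // mirroring the WaitForSSH loop in the log above.
    func waitForSSH(addr, user, keyPath string, timeout time.Duration) error {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
            Timeout:         10 * time.Second,
        }
        for deadline := time.Now().Add(timeout); time.Now().Before(deadline); time.Sleep(time.Second) {
            client, err := ssh.Dial("tcp", addr, cfg)
            if err != nil {
                continue // VM not reachable yet, retry
            }
            sess, err := client.NewSession()
            if err == nil {
                err = sess.Run("exit 0") // same no-op probe as in the log
                sess.Close()
            }
            client.Close()
            if err == nil {
                return nil
            }
        }
        return errors.New("timed out waiting for SSH")
    }

    func main() {
        if err := waitForSSH("192.168.39.22:22", "docker", "/path/to/id_rsa", time.Minute); err != nil {
            log.Fatal(err)
        }
    }
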
	I0813 00:04:14.270785  826514 main.go:130] libmachine: Detecting the provisioner...
	I0813 00:04:14.270793  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHHostname
	I0813 00:04:14.276005  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:14.276321  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e4:55", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:04:13 +0000 UTC Type:0 Mac:52:54:00:b5:e4:55 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:multinode-20210813000359-820289 Clientid:01:52:54:00:b5:e4:55}
	I0813 00:04:14.276357  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined IP address 192.168.39.22 and MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:14.276551  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHPort
	I0813 00:04:14.276749  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHKeyPath
	I0813 00:04:14.276918  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHKeyPath
	I0813 00:04:14.277089  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHUsername
	I0813 00:04:14.277258  826514 main.go:130] libmachine: Using SSH client type: native
	I0813 00:04:14.277400  826514 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0813 00:04:14.277411  826514 main.go:130] libmachine: About to run SSH command:
	cat /etc/os-release
	I0813 00:04:14.388249  826514 main.go:130] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2020.02.12
	ID=buildroot
	VERSION_ID=2020.02.12
	PRETTY_NAME="Buildroot 2020.02.12"
	
	I0813 00:04:14.388309  826514 main.go:130] libmachine: found compatible host: buildroot
	I0813 00:04:14.388319  826514 main.go:130] libmachine: Provisioning with buildroot...
	I0813 00:04:14.388328  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetMachineName
	I0813 00:04:14.388579  826514 buildroot.go:166] provisioning hostname "multinode-20210813000359-820289"
	I0813 00:04:14.388608  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetMachineName
	I0813 00:04:14.388769  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHHostname
	I0813 00:04:14.393460  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:14.393774  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e4:55", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:04:13 +0000 UTC Type:0 Mac:52:54:00:b5:e4:55 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:multinode-20210813000359-820289 Clientid:01:52:54:00:b5:e4:55}
	I0813 00:04:14.393807  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined IP address 192.168.39.22 and MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:14.393876  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHPort
	I0813 00:04:14.394042  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHKeyPath
	I0813 00:04:14.394165  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHKeyPath
	I0813 00:04:14.394266  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHUsername
	I0813 00:04:14.394436  826514 main.go:130] libmachine: Using SSH client type: native
	I0813 00:04:14.394581  826514 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0813 00:04:14.394600  826514 main.go:130] libmachine: About to run SSH command:
	sudo hostname multinode-20210813000359-820289 && echo "multinode-20210813000359-820289" | sudo tee /etc/hostname
	I0813 00:04:14.513606  826514 main.go:130] libmachine: SSH cmd err, output: <nil>: multinode-20210813000359-820289
	
	I0813 00:04:14.513629  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHHostname
	I0813 00:04:14.518489  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:14.518820  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e4:55", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:04:13 +0000 UTC Type:0 Mac:52:54:00:b5:e4:55 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:multinode-20210813000359-820289 Clientid:01:52:54:00:b5:e4:55}
	I0813 00:04:14.518844  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined IP address 192.168.39.22 and MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:14.518980  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHPort
	I0813 00:04:14.519153  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHKeyPath
	I0813 00:04:14.519314  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHKeyPath
	I0813 00:04:14.519454  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHUsername
	I0813 00:04:14.519614  826514 main.go:130] libmachine: Using SSH client type: native
	I0813 00:04:14.519795  826514 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0813 00:04:14.519819  826514 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20210813000359-820289' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20210813000359-820289/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20210813000359-820289' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 00:04:14.637521  826514 main.go:130] libmachine: SSH cmd err, output: <nil>: 
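
The shell fragment above makes the /etc/hosts edit idempotent: do nothing when the hostname is already mapped, rewrite an existing 127.0.1.1 line if present, otherwise append one. The same logic as a small Go sketch (a hypothetical helper, for illustration only):

    package main

    import (
        "fmt"
        "strings"
    )

    // ensureHostsEntry mirrors the grep/sed logic above: no-op if the
    // hostname is already present, rewrite an existing 127.0.1.1 line,
    // otherwise append a new one.
    func ensureHostsEntry(hosts, hostname string) string {
        if strings.Contains(hosts, hostname) {
            return hosts
        }
        lines := strings.Split(hosts, "\n")
        for i, l := range lines {
            if strings.HasPrefix(l, "127.0.1.1") {
                lines[i] = "127.0.1.1 " + hostname
                return strings.Join(lines, "\n")
            }
        }
        return hosts + "\n127.0.1.1 " + hostname
    }

    func main() {
        fmt.Println(ensureHostsEntry("127.0.0.1 localhost\n127.0.1.1 oldname", "multinode-20210813000359-820289"))
    }
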
	I0813 00:04:14.637545  826514 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube}
	I0813 00:04:14.637591  826514 buildroot.go:174] setting up certificates
	I0813 00:04:14.637602  826514 provision.go:83] configureAuth start
	I0813 00:04:14.637624  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetMachineName
	I0813 00:04:14.637810  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetIP
	I0813 00:04:14.642593  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:14.642897  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e4:55", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:04:13 +0000 UTC Type:0 Mac:52:54:00:b5:e4:55 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:multinode-20210813000359-820289 Clientid:01:52:54:00:b5:e4:55}
	I0813 00:04:14.642919  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined IP address 192.168.39.22 and MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:14.643011  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHHostname
	I0813 00:04:14.647090  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:14.647337  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e4:55", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:04:13 +0000 UTC Type:0 Mac:52:54:00:b5:e4:55 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:multinode-20210813000359-820289 Clientid:01:52:54:00:b5:e4:55}
	I0813 00:04:14.647370  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined IP address 192.168.39.22 and MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:14.647425  826514 provision.go:137] copyHostCerts
	I0813 00:04:14.647454  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem
	I0813 00:04:14.647492  826514 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem, removing ...
	I0813 00:04:14.647502  826514 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem
	I0813 00:04:14.647555  826514 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem (1679 bytes)
	I0813 00:04:14.647614  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem
	I0813 00:04:14.647636  826514 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem, removing ...
	I0813 00:04:14.647641  826514 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem
	I0813 00:04:14.647661  826514 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem (1078 bytes)
	I0813 00:04:14.647762  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem
	I0813 00:04:14.647787  826514 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem, removing ...
	I0813 00:04:14.647794  826514 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem
	I0813 00:04:14.647818  826514 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem (1123 bytes)
	I0813 00:04:14.647864  826514 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem org=jenkins.multinode-20210813000359-820289 san=[192.168.39.22 192.168.39.22 localhost 127.0.0.1 minikube multinode-20210813000359-820289]
	I0813 00:04:14.939227  826514 provision.go:171] copyRemoteCerts
	I0813 00:04:14.939287  826514 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 00:04:14.939317  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHHostname
	I0813 00:04:14.944061  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:14.944333  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e4:55", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:04:13 +0000 UTC Type:0 Mac:52:54:00:b5:e4:55 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:multinode-20210813000359-820289 Clientid:01:52:54:00:b5:e4:55}
	I0813 00:04:14.944368  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined IP address 192.168.39.22 and MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:14.944478  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHPort
	I0813 00:04:14.944674  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHKeyPath
	I0813 00:04:14.944821  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHUsername
	I0813 00:04:14.944950  826514 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289/id_rsa Username:docker}
	I0813 00:04:15.026924  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0813 00:04:15.026968  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 00:04:15.042656  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0813 00:04:15.042713  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0813 00:04:15.058311  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0813 00:04:15.058359  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 00:04:15.074148  826514 provision.go:86] duration metric: configureAuth took 436.531554ms
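
The "generating server cert" step above signs a machine server certificate against the minikube CA, embedding the SAN list from the log (both IPs, localhost, minikube, and the machine name). A self-contained sketch of that kind of signing with Go's crypto/x509, using freshly generated keys in place of the real ca.pem/ca-key.pem (assumed and simplified, not the actual minikube code):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA standing in for ca.pem/ca-key.pem from the log.
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        caCert, err := x509.ParseCertificate(caDER)
        if err != nil {
            log.Fatal(err)
        }

        // Server certificate carrying the SANs listed in the log's san=[...].
        srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-20210813000359-820289"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(1, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("192.168.39.22"), net.ParseIP("127.0.0.1")},
            DNSNames:     []string{"localhost", "minikube", "multinode-20210813000359-820289"},
        }
        srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}) // would be server.pem
    }
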
	I0813 00:04:15.074173  826514 buildroot.go:189] setting minikube options for container-runtime
	I0813 00:04:15.074488  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHHostname
	I0813 00:04:15.080330  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:15.080752  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e4:55", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:04:13 +0000 UTC Type:0 Mac:52:54:00:b5:e4:55 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:multinode-20210813000359-820289 Clientid:01:52:54:00:b5:e4:55}
	I0813 00:04:15.080796  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined IP address 192.168.39.22 and MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:15.080943  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHPort
	I0813 00:04:15.081128  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHKeyPath
	I0813 00:04:15.081239  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHKeyPath
	I0813 00:04:15.081375  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHUsername
	I0813 00:04:15.081520  826514 main.go:130] libmachine: Using SSH client type: native
	I0813 00:04:15.081653  826514 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0813 00:04:15.081668  826514 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 00:04:15.760734  826514 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
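
Note that "%!s(MISSING)" above is not part of the command that ran; it is Go's fmt placeholder for an argument omitted when the command was logged (the same artifact appears in later "stat" and "printf" lines). The executed command interpolates the options string, roughly as in this assumed reconstruction:

    package main

    import "fmt"

    func main() {
        // Raw string; %%s becomes a literal %s in the rendered command.
        const tmpl = `sudo mkdir -p /etc/sysconfig && printf %%s "
    CRIO_MINIKUBE_OPTIONS='%s'
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`
        fmt.Printf(tmpl+"\n", "--insecure-registry 10.96.0.0/12 ")
    }
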
	
	I0813 00:04:15.760791  826514 main.go:130] libmachine: Checking connection to Docker...
	I0813 00:04:15.760802  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetURL
	I0813 00:04:15.763347  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | Using libvirt version 3000000
	I0813 00:04:15.767747  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:15.768070  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e4:55", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:04:13 +0000 UTC Type:0 Mac:52:54:00:b5:e4:55 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:multinode-20210813000359-820289 Clientid:01:52:54:00:b5:e4:55}
	I0813 00:04:15.768106  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined IP address 192.168.39.22 and MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:15.768262  826514 main.go:130] libmachine: Docker is up and running!
	I0813 00:04:15.768284  826514 main.go:130] libmachine: Reticulating splines...
	I0813 00:04:15.768292  826514 client.go:171] LocalClient.Create took 16.168318436s
	I0813 00:04:15.768310  826514 start.go:168] duration metric: libmachine.API.Create for "multinode-20210813000359-820289" took 16.168377925s
	I0813 00:04:15.768321  826514 start.go:267] post-start starting for "multinode-20210813000359-820289" (driver="kvm2")
	I0813 00:04:15.768327  826514 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 00:04:15.768345  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .DriverName
	I0813 00:04:15.768551  826514 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 00:04:15.768582  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHHostname
	I0813 00:04:15.772905  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:15.773227  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e4:55", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:04:13 +0000 UTC Type:0 Mac:52:54:00:b5:e4:55 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:multinode-20210813000359-820289 Clientid:01:52:54:00:b5:e4:55}
	I0813 00:04:15.773258  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined IP address 192.168.39.22 and MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:15.773366  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHPort
	I0813 00:04:15.773535  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHKeyPath
	I0813 00:04:15.773688  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHUsername
	I0813 00:04:15.773815  826514 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289/id_rsa Username:docker}
	I0813 00:04:15.854725  826514 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 00:04:15.859087  826514 command_runner.go:124] > NAME=Buildroot
	I0813 00:04:15.859105  826514 command_runner.go:124] > VERSION=2020.02.12
	I0813 00:04:15.859111  826514 command_runner.go:124] > ID=buildroot
	I0813 00:04:15.859118  826514 command_runner.go:124] > VERSION_ID=2020.02.12
	I0813 00:04:15.859125  826514 command_runner.go:124] > PRETTY_NAME="Buildroot 2020.02.12"
	I0813 00:04:15.859470  826514 info.go:137] Remote host: Buildroot 2020.02.12
	I0813 00:04:15.859490  826514 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/addons for local assets ...
	I0813 00:04:15.859539  826514 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files for local assets ...
	I0813 00:04:15.859659  826514 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/8202892.pem -> 8202892.pem in /etc/ssl/certs
	I0813 00:04:15.859671  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/8202892.pem -> /etc/ssl/certs/8202892.pem
	I0813 00:04:15.859795  826514 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 00:04:15.866931  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/8202892.pem --> /etc/ssl/certs/8202892.pem (1708 bytes)
	I0813 00:04:15.882819  826514 start.go:270] post-start completed in 114.487613ms
	I0813 00:04:15.882861  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetConfigRaw
	I0813 00:04:15.883436  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetIP
	I0813 00:04:15.888100  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:15.888415  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e4:55", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:04:13 +0000 UTC Type:0 Mac:52:54:00:b5:e4:55 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:multinode-20210813000359-820289 Clientid:01:52:54:00:b5:e4:55}
	I0813 00:04:15.888445  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined IP address 192.168.39.22 and MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:15.888637  826514 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/config.json ...
	I0813 00:04:15.888801  826514 start.go:129] duration metric: createHost completed in 16.303716534s
	I0813 00:04:15.888815  826514 start.go:80] releasing machines lock for "multinode-20210813000359-820289", held for 16.303832114s
	I0813 00:04:15.888846  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .DriverName
	I0813 00:04:15.889026  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetIP
	I0813 00:04:15.893253  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:15.893508  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e4:55", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:04:13 +0000 UTC Type:0 Mac:52:54:00:b5:e4:55 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:multinode-20210813000359-820289 Clientid:01:52:54:00:b5:e4:55}
	I0813 00:04:15.893543  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined IP address 192.168.39.22 and MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:15.893685  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .DriverName
	I0813 00:04:15.893866  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .DriverName
	I0813 00:04:15.894287  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .DriverName
	I0813 00:04:15.894476  826514 ssh_runner.go:149] Run: systemctl --version
	I0813 00:04:15.894502  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHHostname
	I0813 00:04:15.894536  826514 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 00:04:15.894579  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHHostname
	I0813 00:04:15.898888  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:15.899229  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e4:55", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:04:13 +0000 UTC Type:0 Mac:52:54:00:b5:e4:55 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:multinode-20210813000359-820289 Clientid:01:52:54:00:b5:e4:55}
	I0813 00:04:15.899260  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined IP address 192.168.39.22 and MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:15.899355  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHPort
	I0813 00:04:15.899505  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHKeyPath
	I0813 00:04:15.899638  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHUsername
	I0813 00:04:15.899773  826514 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289/id_rsa Username:docker}
	I0813 00:04:15.900004  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:15.900298  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e4:55", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:04:13 +0000 UTC Type:0 Mac:52:54:00:b5:e4:55 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:multinode-20210813000359-820289 Clientid:01:52:54:00:b5:e4:55}
	I0813 00:04:15.900321  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined IP address 192.168.39.22 and MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:15.900502  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHPort
	I0813 00:04:15.900688  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHKeyPath
	I0813 00:04:15.900837  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHUsername
	I0813 00:04:15.900965  826514 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289/id_rsa Username:docker}
	I0813 00:04:15.981023  826514 command_runner.go:124] > systemd 244 (244)
	I0813 00:04:15.981064  826514 command_runner.go:124] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK +SYSVINIT +UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0813 00:04:15.981095  826514 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 00:04:15.981185  826514 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 00:04:16.007266  826514 command_runner.go:124] > <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
	I0813 00:04:16.007294  826514 command_runner.go:124] > <TITLE>302 Moved</TITLE></HEAD><BODY>
	I0813 00:04:16.007304  826514 command_runner.go:124] > <H1>302 Moved</H1>
	I0813 00:04:16.007312  826514 command_runner.go:124] > The document has moved
	I0813 00:04:16.007323  826514 command_runner.go:124] > <A HREF="https://cloud.google.com/container-registry/">here</A>.
	I0813 00:04:16.007336  826514 command_runner.go:124] > </BODY></HTML>
	I0813 00:04:16.007461  826514 command_runner.go:124] ! time="2021-08-13T00:04:15Z" level=warning msg="image connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	I0813 00:04:17.989317  826514 command_runner.go:124] ! time="2021-08-13T00:04:17Z" level=error msg="connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	I0813 00:04:19.976052  826514 command_runner.go:124] ! time="2021-08-13T00:04:19Z" level=error msg="connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	I0813 00:04:19.980765  826514 command_runner.go:124] > {
	I0813 00:04:19.980782  826514 command_runner.go:124] >   "images": [
	I0813 00:04:19.980787  826514 command_runner.go:124] >   ]
	I0813 00:04:19.980791  826514 command_runner.go:124] > }
	I0813 00:04:19.980810  826514 ssh_runner.go:189] Completed: sudo crictl images --output json: (3.999609413s)
	I0813 00:04:19.980902  826514 crio.go:420] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.21.3". assuming images are not preloaded.
	I0813 00:04:19.980950  826514 ssh_runner.go:149] Run: which lz4
	I0813 00:04:19.984746  826514 command_runner.go:124] > /bin/lz4
	I0813 00:04:19.984881  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0813 00:04:19.984969  826514 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0813 00:04:19.989446  826514 command_runner.go:124] ! stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0813 00:04:19.990091  826514 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0813 00:04:19.990122  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (576184326 bytes)
	I0813 00:04:22.875606  826514 crio.go:362] Took 2.890671 seconds to copy over tarball
	I0813 00:04:22.875677  826514 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0813 00:04:27.289558  826514 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (4.413845256s)
	I0813 00:04:27.289590  826514 crio.go:369] Took 4.413952 seconds to extract the tarball
	I0813 00:04:27.289604  826514 ssh_runner.go:100] rm: /preloaded.tar.lz4
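
The preload flow above is: stat the tarball (exit status 1 means it is absent), scp roughly 576 MB over, extract it with tar's lz4 filter, then delete the archive. A local, illustrative equivalent of the check-extract-remove portion (paths assumed; in the real flow the tarball is copied over SSH first):

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        const tarball = "/preloaded.tar.lz4"
        if _, err := os.Stat(tarball); err != nil {
            // Mirrors the failed existence check in the log: the tarball
            // would be scp'd into place at this point.
            log.Printf("existence check failed (%v): tarball must be copied first", err)
            return
        }
        // Same extraction command the log runs over SSH.
        out, err := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball).CombinedOutput()
        if err != nil {
            log.Fatalf("extract: %v\n%s", err, out)
        }
        if err := os.Remove(tarball); err != nil {
            log.Fatal(err)
        }
    }
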
	I0813 00:04:27.328003  826514 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 00:04:27.340687  826514 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 00:04:27.350625  826514 docker.go:153] disabling docker service ...
	I0813 00:04:27.350666  826514 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 00:04:27.361626  826514 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 00:04:27.372036  826514 command_runner.go:124] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0813 00:04:27.372382  826514 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 00:04:27.495882  826514 command_runner.go:124] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0813 00:04:27.495941  826514 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 00:04:27.630479  826514 command_runner.go:124] ! Unit docker.service does not exist, proceeding anyway.
	I0813 00:04:27.630512  826514 command_runner.go:124] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0813 00:04:27.630566  826514 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
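
Disabling Docker above is best-effort: units that are not loaded only produce warnings ("Failed to stop docker.service: Unit docker.service not loaded."), and the final is-active check verifies the result. A sketch of the same sequence (assumed helper, not minikube's code):

    package main

    import (
        "log"
        "os/exec"
    )

    // run executes a systemctl command via sudo and logs, but tolerates,
    // failures, since the docker unit may not exist at all.
    func run(args ...string) error {
        out, err := exec.Command("sudo", args...).CombinedOutput()
        if err != nil {
            log.Printf("%v: %v (%s)", args, err, out)
        }
        return err
    }

    func main() {
        run("systemctl", "stop", "-f", "docker.socket")
        run("systemctl", "stop", "-f", "docker.service") // may warn: unit not loaded
        run("systemctl", "disable", "docker.socket")
        run("systemctl", "mask", "docker.service")
        // Mirrors the final verification in the log.
        if err := run("systemctl", "is-active", "--quiet", "service", "docker"); err == nil {
            log.Fatal("docker is still active")
        }
    }
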
	I0813 00:04:27.640371  826514 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 00:04:27.652853  826514 command_runner.go:124] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0813 00:04:27.652867  826514 command_runner.go:124] > image-endpoint: unix:///var/run/crio/crio.sock
	I0813 00:04:27.653303  826514 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0813 00:04:27.660562  826514 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0813 00:04:27.660581  826514 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
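
Two line-level rewrites of /etc/crio/crio.conf happen here: pinning the pause image and pointing CRI-O at the custom "kindnet" CNI network. (The /etc/crictl.yaml written just before also pins crictl to the CRI-O socket, which silences the deprecated dockershim/containerd endpoint probing seen in the earlier warnings.) The sed commands boil down to this kind of keyed line replacement (illustrative sketch):

    package main

    import (
        "fmt"
        "strings"
    )

    // setConf rewrites every line containing `key = `, commented or not,
    // just as the sed patterns above do.
    func setConf(conf, key, value string) string {
        lines := strings.Split(conf, "\n")
        for i, l := range lines {
            if strings.Contains(l, key+" = ") {
                lines[i] = fmt.Sprintf("%s = %q", key, value)
            }
        }
        return strings.Join(lines, "\n")
    }

    func main() {
        conf := `pause_image = "k8s.gcr.io/pause:3.2"
    # cni_default_network = ""`
        conf = setConf(conf, "pause_image", "k8s.gcr.io/pause:3.4.1")
        conf = setConf(conf, "cni_default_network", "kindnet")
        fmt.Println(conf)
    }
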
	I0813 00:04:27.668141  826514 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 00:04:27.674532  826514 command_runner.go:124] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 00:04:27.674885  826514 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 00:04:27.674939  826514 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 00:04:27.689306  826514 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
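
The sysctl probe fails with status 255 because br_netfilter is not loaded yet; loading the module is what creates /proc/sys/net/bridge/bridge-nf-call-iptables, after which IPv4 forwarding is switched on directly through procfs. A sketch of that order of operations (assumed and simplified; the writes require root, as the sudo invocations above do):

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
            // "cannot stat" in the log means the module is not loaded yet.
            if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
                log.Fatalf("modprobe: %v\n%s", err, out)
            }
        }
        // Equivalent of: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
            log.Fatal(err)
        }
    }
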
	I0813 00:04:27.695945  826514 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 00:04:27.822191  826514 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 00:04:27.959982  826514 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 00:04:27.960051  826514 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 00:04:27.964993  826514 command_runner.go:124] >   File: /var/run/crio/crio.sock
	I0813 00:04:27.965015  826514 command_runner.go:124] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0813 00:04:27.965022  826514 command_runner.go:124] > Device: 14h/20d	Inode: 29936       Links: 1
	I0813 00:04:27.965029  826514 command_runner.go:124] > Access: (0755/srwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0813 00:04:27.965034  826514 command_runner.go:124] > Access: 2021-08-13 00:04:19.926943447 +0000
	I0813 00:04:27.965043  826514 command_runner.go:124] > Modify: 2021-08-13 00:04:15.656484954 +0000
	I0813 00:04:27.965051  826514 command_runner.go:124] > Change: 2021-08-13 00:04:15.656484954 +0000
	I0813 00:04:27.965057  826514 command_runner.go:124] >  Birth: -
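
The "Will wait 60s for socket path" step is a filesystem poll: it succeeds as soon as stat reports a socket at /var/run/crio/crio.sock, as the output above shows. A minimal sketch of such a wait (assumed helper):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until path exists and is a unix socket.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", time.Minute); err != nil {
            fmt.Println(err)
        }
    }
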
	I0813 00:04:27.965289  826514 start.go:417] Will wait 60s for crictl version
	I0813 00:04:27.965344  826514 ssh_runner.go:149] Run: sudo crictl version
	I0813 00:04:27.995939  826514 command_runner.go:124] > Version:  0.1.0
	I0813 00:04:27.995957  826514 command_runner.go:124] > RuntimeName:  cri-o
	I0813 00:04:27.995961  826514 command_runner.go:124] > RuntimeVersion:  1.20.2
	I0813 00:04:27.995967  826514 command_runner.go:124] > RuntimeApiVersion:  v1alpha1
	I0813 00:04:27.996046  826514 start.go:426] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.2
	RuntimeApiVersion:  v1alpha1
	I0813 00:04:27.996116  826514 ssh_runner.go:149] Run: crio --version
	I0813 00:04:28.101692  826514 command_runner.go:124] > crio version 1.20.2
	I0813 00:04:28.101721  826514 command_runner.go:124] > Version:       1.20.2
	I0813 00:04:28.101728  826514 command_runner.go:124] > GitCommit:     d5a999ad0a35d895ded554e1e18c142075501a98
	I0813 00:04:28.101733  826514 command_runner.go:124] > GitTreeState:  clean
	I0813 00:04:28.101740  826514 command_runner.go:124] > BuildDate:     2021-08-06T09:19:16Z
	I0813 00:04:28.101748  826514 command_runner.go:124] > GoVersion:     go1.13.15
	I0813 00:04:28.101754  826514 command_runner.go:124] > Compiler:      gc
	I0813 00:04:28.101761  826514 command_runner.go:124] > Platform:      linux/amd64
	I0813 00:04:28.103227  826514 command_runner.go:124] ! time="2021-08-13T00:04:28Z" level=info msg="Starting CRI-O, version: 1.20.2, git: d5a999ad0a35d895ded554e1e18c142075501a98(clean)"
	I0813 00:04:28.103308  826514 ssh_runner.go:149] Run: crio --version
	I0813 00:04:28.371922  826514 command_runner.go:124] > crio version 1.20.2
	I0813 00:04:28.371946  826514 command_runner.go:124] > Version:       1.20.2
	I0813 00:04:28.371954  826514 command_runner.go:124] > GitCommit:     d5a999ad0a35d895ded554e1e18c142075501a98
	I0813 00:04:28.371958  826514 command_runner.go:124] > GitTreeState:  clean
	I0813 00:04:28.371964  826514 command_runner.go:124] > BuildDate:     2021-08-06T09:19:16Z
	I0813 00:04:28.371968  826514 command_runner.go:124] > GoVersion:     go1.13.15
	I0813 00:04:28.371972  826514 command_runner.go:124] > Compiler:      gc
	I0813 00:04:28.371977  826514 command_runner.go:124] > Platform:      linux/amd64
	I0813 00:04:28.373487  826514 command_runner.go:124] ! time="2021-08-13T00:04:28Z" level=info msg="Starting CRI-O, version: 1.20.2, git: d5a999ad0a35d895ded554e1e18c142075501a98(clean)"
	I0813 00:04:30.488898  826514 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.2 ...
	I0813 00:04:30.489022  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetIP
	I0813 00:04:31.496831  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:31.497127  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e4:55", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:04:13 +0000 UTC Type:0 Mac:52:54:00:b5:e4:55 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:multinode-20210813000359-820289 Clientid:01:52:54:00:b5:e4:55}
	I0813 00:04:31.497164  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined IP address 192.168.39.22 and MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:04:31.497356  826514 ssh_runner.go:149] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0813 00:04:31.502501  826514 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 00:04:31.513590  826514 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 00:04:31.513650  826514 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 00:04:31.586120  826514 command_runner.go:124] > {
	I0813 00:04:31.586143  826514 command_runner.go:124] >   "images": [
	I0813 00:04:31.586148  826514 command_runner.go:124] >     {
	I0813 00:04:31.586156  826514 command_runner.go:124] >       "id": "6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb",
	I0813 00:04:31.586161  826514 command_runner.go:124] >       "repoTags": [
	I0813 00:04:31.586167  826514 command_runner.go:124] >         "docker.io/kindest/kindnetd:v20210326-1e038dc5"
	I0813 00:04:31.586171  826514 command_runner.go:124] >       ],
	I0813 00:04:31.586176  826514 command_runner.go:124] >       "repoDigests": [
	I0813 00:04:31.586186  826514 command_runner.go:124] >         "docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1",
	I0813 00:04:31.586196  826514 command_runner.go:124] >         "docker.io/kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c"
	I0813 00:04:31.586200  826514 command_runner.go:124] >       ],
	I0813 00:04:31.586205  826514 command_runner.go:124] >       "size": "119984626",
	I0813 00:04:31.586209  826514 command_runner.go:124] >       "uid": null,
	I0813 00:04:31.586213  826514 command_runner.go:124] >       "username": "",
	I0813 00:04:31.586224  826514 command_runner.go:124] >       "spec": null
	I0813 00:04:31.586232  826514 command_runner.go:124] >     },
	I0813 00:04:31.586236  826514 command_runner.go:124] >     {
	I0813 00:04:31.586243  826514 command_runner.go:124] >       "id": "9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db",
	I0813 00:04:31.586248  826514 command_runner.go:124] >       "repoTags": [
	I0813 00:04:31.586254  826514 command_runner.go:124] >         "docker.io/kubernetesui/dashboard:v2.1.0"
	I0813 00:04:31.586260  826514 command_runner.go:124] >       ],
	I0813 00:04:31.586264  826514 command_runner.go:124] >       "repoDigests": [
	I0813 00:04:31.586274  826514 command_runner.go:124] >         "docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f",
	I0813 00:04:31.586284  826514 command_runner.go:124] >         "docker.io/kubernetesui/dashboard@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6"
	I0813 00:04:31.586288  826514 command_runner.go:124] >       ],
	I0813 00:04:31.586293  826514 command_runner.go:124] >       "size": "228528983",
	I0813 00:04:31.586296  826514 command_runner.go:124] >       "uid": null,
	I0813 00:04:31.586301  826514 command_runner.go:124] >       "username": "nonroot",
	I0813 00:04:31.586308  826514 command_runner.go:124] >       "spec": null
	I0813 00:04:31.586312  826514 command_runner.go:124] >     },
	I0813 00:04:31.586316  826514 command_runner.go:124] >     {
	I0813 00:04:31.586322  826514 command_runner.go:124] >       "id": "86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4",
	I0813 00:04:31.586327  826514 command_runner.go:124] >       "repoTags": [
	I0813 00:04:31.586333  826514 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper:v1.0.4"
	I0813 00:04:31.586340  826514 command_runner.go:124] >       ],
	I0813 00:04:31.586345  826514 command_runner.go:124] >       "repoDigests": [
	I0813 00:04:31.586353  826514 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper@sha256:555981a24f184420f3be0c79d4efb6c948a85cfce84034f85a563f4151a81cbf",
	I0813 00:04:31.586364  826514 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper@sha256:d78f995c07124874c2a2e9b404cffa6bc6233668d63d6c6210574971f3d5914b"
	I0813 00:04:31.586367  826514 command_runner.go:124] >       ],
	I0813 00:04:31.586372  826514 command_runner.go:124] >       "size": "36950651",
	I0813 00:04:31.586376  826514 command_runner.go:124] >       "uid": null,
	I0813 00:04:31.586380  826514 command_runner.go:124] >       "username": "",
	I0813 00:04:31.586386  826514 command_runner.go:124] >       "spec": null
	I0813 00:04:31.586389  826514 command_runner.go:124] >     },
	I0813 00:04:31.586393  826514 command_runner.go:124] >     {
	I0813 00:04:31.586399  826514 command_runner.go:124] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0813 00:04:31.586406  826514 command_runner.go:124] >       "repoTags": [
	I0813 00:04:31.586411  826514 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0813 00:04:31.586414  826514 command_runner.go:124] >       ],
	I0813 00:04:31.586418  826514 command_runner.go:124] >       "repoDigests": [
	I0813 00:04:31.586428  826514 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0813 00:04:31.586437  826514 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0813 00:04:31.586442  826514 command_runner.go:124] >       ],
	I0813 00:04:31.586447  826514 command_runner.go:124] >       "size": "31470524",
	I0813 00:04:31.586454  826514 command_runner.go:124] >       "uid": null,
	I0813 00:04:31.586459  826514 command_runner.go:124] >       "username": "",
	I0813 00:04:31.586463  826514 command_runner.go:124] >       "spec": null
	I0813 00:04:31.586466  826514 command_runner.go:124] >     },
	I0813 00:04:31.586470  826514 command_runner.go:124] >     {
	I0813 00:04:31.586476  826514 command_runner.go:124] >       "id": "296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899",
	I0813 00:04:31.586481  826514 command_runner.go:124] >       "repoTags": [
	I0813 00:04:31.586487  826514 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns:v1.8.0"
	I0813 00:04:31.586491  826514 command_runner.go:124] >       ],
	I0813 00:04:31.586495  826514 command_runner.go:124] >       "repoDigests": [
	I0813 00:04:31.586503  826514 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61",
	I0813 00:04:31.586513  826514 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e"
	I0813 00:04:31.586517  826514 command_runner.go:124] >       ],
	I0813 00:04:31.586521  826514 command_runner.go:124] >       "size": "42585056",
	I0813 00:04:31.586525  826514 command_runner.go:124] >       "uid": null,
	I0813 00:04:31.586529  826514 command_runner.go:124] >       "username": "",
	I0813 00:04:31.586534  826514 command_runner.go:124] >       "spec": null
	I0813 00:04:31.586538  826514 command_runner.go:124] >     },
	I0813 00:04:31.586542  826514 command_runner.go:124] >     {
	I0813 00:04:31.586548  826514 command_runner.go:124] >       "id": "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934",
	I0813 00:04:31.586554  826514 command_runner.go:124] >       "repoTags": [
	I0813 00:04:31.586558  826514 command_runner.go:124] >         "k8s.gcr.io/etcd:3.4.13-0"
	I0813 00:04:31.586563  826514 command_runner.go:124] >       ],
	I0813 00:04:31.586566  826514 command_runner.go:124] >       "repoDigests": [
	I0813 00:04:31.586574  826514 command_runner.go:124] >         "k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2",
	I0813 00:04:31.586581  826514 command_runner.go:124] >         "k8s.gcr.io/etcd@sha256:bd4d2c9a19be8a492bc79df53eee199fd04b415e9993eb69f7718052602a147a"
	I0813 00:04:31.586585  826514 command_runner.go:124] >       ],
	I0813 00:04:31.586589  826514 command_runner.go:124] >       "size": "254662613",
	I0813 00:04:31.586597  826514 command_runner.go:124] >       "uid": null,
	I0813 00:04:31.586601  826514 command_runner.go:124] >       "username": "",
	I0813 00:04:31.586607  826514 command_runner.go:124] >       "spec": null
	I0813 00:04:31.586610  826514 command_runner.go:124] >     },
	I0813 00:04:31.586613  826514 command_runner.go:124] >     {
	I0813 00:04:31.586619  826514 command_runner.go:124] >       "id": "3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80",
	I0813 00:04:31.586626  826514 command_runner.go:124] >       "repoTags": [
	I0813 00:04:31.586630  826514 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver:v1.21.3"
	I0813 00:04:31.586634  826514 command_runner.go:124] >       ],
	I0813 00:04:31.586638  826514 command_runner.go:124] >       "repoDigests": [
	I0813 00:04:31.586645  826514 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2",
	I0813 00:04:31.586653  826514 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver@sha256:910cfdf034262c7b68ecb17c0885f39bdaaad07d87c9a5b6320819d8500b7ee5"
	I0813 00:04:31.586657  826514 command_runner.go:124] >       ],
	I0813 00:04:31.586661  826514 command_runner.go:124] >       "size": "126878961",
	I0813 00:04:31.586666  826514 command_runner.go:124] >       "uid": {
	I0813 00:04:31.586670  826514 command_runner.go:124] >         "value": "0"
	I0813 00:04:31.586674  826514 command_runner.go:124] >       },
	I0813 00:04:31.586678  826514 command_runner.go:124] >       "username": "",
	I0813 00:04:31.586682  826514 command_runner.go:124] >       "spec": null
	I0813 00:04:31.586685  826514 command_runner.go:124] >     },
	I0813 00:04:31.586688  826514 command_runner.go:124] >     {
	I0813 00:04:31.586695  826514 command_runner.go:124] >       "id": "bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9",
	I0813 00:04:31.586701  826514 command_runner.go:124] >       "repoTags": [
	I0813 00:04:31.586706  826514 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager:v1.21.3"
	I0813 00:04:31.586710  826514 command_runner.go:124] >       ],
	I0813 00:04:31.586714  826514 command_runner.go:124] >       "repoDigests": [
	I0813 00:04:31.586721  826514 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b",
	I0813 00:04:31.586734  826514 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager@sha256:7fb1f6614597c255b475ed8abf553e0d4e8ea211b06a90bed53eaddcfb9c354f"
	I0813 00:04:31.586739  826514 command_runner.go:124] >       ],
	I0813 00:04:31.586761  826514 command_runner.go:124] >       "size": "121087578",
	I0813 00:04:31.586768  826514 command_runner.go:124] >       "uid": {
	I0813 00:04:31.586772  826514 command_runner.go:124] >         "value": "0"
	I0813 00:04:31.586775  826514 command_runner.go:124] >       },
	I0813 00:04:31.586784  826514 command_runner.go:124] >       "username": "",
	I0813 00:04:31.586801  826514 command_runner.go:124] >       "spec": null
	I0813 00:04:31.586806  826514 command_runner.go:124] >     },
	I0813 00:04:31.586810  826514 command_runner.go:124] >     {
	I0813 00:04:31.586816  826514 command_runner.go:124] >       "id": "adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92",
	I0813 00:04:31.586822  826514 command_runner.go:124] >       "repoTags": [
	I0813 00:04:31.586827  826514 command_runner.go:124] >         "k8s.gcr.io/kube-proxy:v1.21.3"
	I0813 00:04:31.586833  826514 command_runner.go:124] >       ],
	I0813 00:04:31.586837  826514 command_runner.go:124] >       "repoDigests": [
	I0813 00:04:31.586844  826514 command_runner.go:124] >         "k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b",
	I0813 00:04:31.586855  826514 command_runner.go:124] >         "k8s.gcr.io/kube-proxy@sha256:c7778d7b97b2a822c3fe3e921d104ac42afbd38268de8df03557465780886627"
	I0813 00:04:31.586858  826514 command_runner.go:124] >       ],
	I0813 00:04:31.586862  826514 command_runner.go:124] >       "size": "105129702",
	I0813 00:04:31.586869  826514 command_runner.go:124] >       "uid": null,
	I0813 00:04:31.586873  826514 command_runner.go:124] >       "username": "",
	I0813 00:04:31.586877  826514 command_runner.go:124] >       "spec": null
	I0813 00:04:31.586880  826514 command_runner.go:124] >     },
	I0813 00:04:31.586884  826514 command_runner.go:124] >     {
	I0813 00:04:31.586890  826514 command_runner.go:124] >       "id": "6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a",
	I0813 00:04:31.586895  826514 command_runner.go:124] >       "repoTags": [
	I0813 00:04:31.586900  826514 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler:v1.21.3"
	I0813 00:04:31.586903  826514 command_runner.go:124] >       ],
	I0813 00:04:31.586907  826514 command_runner.go:124] >       "repoDigests": [
	I0813 00:04:31.586915  826514 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4",
	I0813 00:04:31.586924  826514 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler@sha256:b61779ea1bd936c137b25b3a7baa5551fbbd84fed8568d15c7c85ab1139521c0"
	I0813 00:04:31.586929  826514 command_runner.go:124] >       ],
	I0813 00:04:31.586933  826514 command_runner.go:124] >       "size": "51893338",
	I0813 00:04:31.586937  826514 command_runner.go:124] >       "uid": {
	I0813 00:04:31.586941  826514 command_runner.go:124] >         "value": "0"
	I0813 00:04:31.586945  826514 command_runner.go:124] >       },
	I0813 00:04:31.586949  826514 command_runner.go:124] >       "username": "",
	I0813 00:04:31.586952  826514 command_runner.go:124] >       "spec": null
	I0813 00:04:31.586956  826514 command_runner.go:124] >     },
	I0813 00:04:31.586959  826514 command_runner.go:124] >     {
	I0813 00:04:31.586966  826514 command_runner.go:124] >       "id": "0f8457a4c2ecaceac160805013dc3c61c63a1ff3dee74a473a36249a748e0253",
	I0813 00:04:31.586971  826514 command_runner.go:124] >       "repoTags": [
	I0813 00:04:31.586975  826514 command_runner.go:124] >         "k8s.gcr.io/pause:3.4.1"
	I0813 00:04:31.586981  826514 command_runner.go:124] >       ],
	I0813 00:04:31.586984  826514 command_runner.go:124] >       "repoDigests": [
	I0813 00:04:31.586992  826514 command_runner.go:124] >         "k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810",
	I0813 00:04:31.587002  826514 command_runner.go:124] >         "k8s.gcr.io/pause@sha256:914e745e524aa94315a25b49a7fafc0aa395e332126930593225d7a513f5a6b2"
	I0813 00:04:31.587005  826514 command_runner.go:124] >       ],
	I0813 00:04:31.587010  826514 command_runner.go:124] >       "size": "689817",
	I0813 00:04:31.587014  826514 command_runner.go:124] >       "uid": null,
	I0813 00:04:31.587018  826514 command_runner.go:124] >       "username": "",
	I0813 00:04:31.587022  826514 command_runner.go:124] >       "spec": null
	I0813 00:04:31.587026  826514 command_runner.go:124] >     }
	I0813 00:04:31.587029  826514 command_runner.go:124] >   ]
	I0813 00:04:31.587032  826514 command_runner.go:124] > }
	I0813 00:04:31.587284  826514 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 00:04:31.587302  826514 crio.go:333] Images already preloaded, skipping extraction
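The decision logged at crio.go:424/333 presumably amounts to diffing the JSON dump above against the list of images the preload tarball is expected to contain. Below is a minimal Go sketch of that kind of check; the types and the helper name are illustrative only, not minikube's actual API:

    // preloadcheck.go - illustrative sketch of deciding "all images are preloaded"
    // from `sudo crictl images --output json` (field names match the dump above).
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    type crictlImage struct {
    	ID       string   `json:"id"`
    	RepoTags []string `json:"repoTags"`
    }

    type crictlImageList struct {
    	Images []crictlImage `json:"images"`
    }

    // imagesPreloaded reports whether every image in want already has a
    // matching repo tag in the runtime's image store.
    func imagesPreloaded(want []string) (bool, error) {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		return false, err
    	}
    	var list crictlImageList
    	if err := json.Unmarshal(out, &list); err != nil {
    		return false, err
    	}
    	have := map[string]bool{}
    	for _, img := range list.Images {
    		for _, tag := range img.RepoTags {
    			have[tag] = true
    		}
    	}
    	for _, w := range want {
    		if !have[w] {
    			return false, nil
    		}
    	}
    	return true, nil
    }

    func main() {
    	ok, err := imagesPreloaded([]string{"k8s.gcr.io/kube-apiserver:v1.21.3"})
    	fmt.Println(ok, err)
    }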
	I0813 00:04:31.587360  826514 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 00:04:31.628479  826514 command_runner.go:124] > {
	I0813 00:04:31.628499  826514 command_runner.go:124] >   "images": [
	I0813 00:04:31.628509  826514 command_runner.go:124] >     {
	I0813 00:04:31.628518  826514 command_runner.go:124] >       "id": "6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb",
	I0813 00:04:31.628527  826514 command_runner.go:124] >       "repoTags": [
	I0813 00:04:31.628536  826514 command_runner.go:124] >         "docker.io/kindest/kindnetd:v20210326-1e038dc5"
	I0813 00:04:31.628542  826514 command_runner.go:124] >       ],
	I0813 00:04:31.628550  826514 command_runner.go:124] >       "repoDigests": [
	I0813 00:04:31.628563  826514 command_runner.go:124] >         "docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1",
	I0813 00:04:31.628575  826514 command_runner.go:124] >         "docker.io/kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c"
	I0813 00:04:31.628581  826514 command_runner.go:124] >       ],
	I0813 00:04:31.628586  826514 command_runner.go:124] >       "size": "119984626",
	I0813 00:04:31.628592  826514 command_runner.go:124] >       "uid": null,
	I0813 00:04:31.628596  826514 command_runner.go:124] >       "username": "",
	I0813 00:04:31.628602  826514 command_runner.go:124] >       "spec": null
	I0813 00:04:31.628608  826514 command_runner.go:124] >     },
	I0813 00:04:31.628612  826514 command_runner.go:124] >     {
	I0813 00:04:31.628622  826514 command_runner.go:124] >       "id": "9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db",
	I0813 00:04:31.628631  826514 command_runner.go:124] >       "repoTags": [
	I0813 00:04:31.628640  826514 command_runner.go:124] >         "docker.io/kubernetesui/dashboard:v2.1.0"
	I0813 00:04:31.628648  826514 command_runner.go:124] >       ],
	I0813 00:04:31.628653  826514 command_runner.go:124] >       "repoDigests": [
	I0813 00:04:31.628663  826514 command_runner.go:124] >         "docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f",
	I0813 00:04:31.628674  826514 command_runner.go:124] >         "docker.io/kubernetesui/dashboard@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6"
	I0813 00:04:31.628678  826514 command_runner.go:124] >       ],
	I0813 00:04:31.628682  826514 command_runner.go:124] >       "size": "228528983",
	I0813 00:04:31.628687  826514 command_runner.go:124] >       "uid": null,
	I0813 00:04:31.628692  826514 command_runner.go:124] >       "username": "nonroot",
	I0813 00:04:31.628700  826514 command_runner.go:124] >       "spec": null
	I0813 00:04:31.628707  826514 command_runner.go:124] >     },
	I0813 00:04:31.628712  826514 command_runner.go:124] >     {
	I0813 00:04:31.628725  826514 command_runner.go:124] >       "id": "86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4",
	I0813 00:04:31.628743  826514 command_runner.go:124] >       "repoTags": [
	I0813 00:04:31.628755  826514 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper:v1.0.4"
	I0813 00:04:31.628760  826514 command_runner.go:124] >       ],
	I0813 00:04:31.628765  826514 command_runner.go:124] >       "repoDigests": [
	I0813 00:04:31.628774  826514 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper@sha256:555981a24f184420f3be0c79d4efb6c948a85cfce84034f85a563f4151a81cbf",
	I0813 00:04:31.628786  826514 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper@sha256:d78f995c07124874c2a2e9b404cffa6bc6233668d63d6c6210574971f3d5914b"
	I0813 00:04:31.628792  826514 command_runner.go:124] >       ],
	I0813 00:04:31.628796  826514 command_runner.go:124] >       "size": "36950651",
	I0813 00:04:31.628802  826514 command_runner.go:124] >       "uid": null,
	I0813 00:04:31.628814  826514 command_runner.go:124] >       "username": "",
	I0813 00:04:31.628824  826514 command_runner.go:124] >       "spec": null
	I0813 00:04:31.628829  826514 command_runner.go:124] >     },
	I0813 00:04:31.628835  826514 command_runner.go:124] >     {
	I0813 00:04:31.628846  826514 command_runner.go:124] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0813 00:04:31.628856  826514 command_runner.go:124] >       "repoTags": [
	I0813 00:04:31.628867  826514 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0813 00:04:31.628876  826514 command_runner.go:124] >       ],
	I0813 00:04:31.628882  826514 command_runner.go:124] >       "repoDigests": [
	I0813 00:04:31.628893  826514 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0813 00:04:31.628904  826514 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0813 00:04:31.628911  826514 command_runner.go:124] >       ],
	I0813 00:04:31.628918  826514 command_runner.go:124] >       "size": "31470524",
	I0813 00:04:31.628930  826514 command_runner.go:124] >       "uid": null,
	I0813 00:04:31.628939  826514 command_runner.go:124] >       "username": "",
	I0813 00:04:31.628947  826514 command_runner.go:124] >       "spec": null
	I0813 00:04:31.628952  826514 command_runner.go:124] >     },
	I0813 00:04:31.628960  826514 command_runner.go:124] >     {
	I0813 00:04:31.628971  826514 command_runner.go:124] >       "id": "296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899",
	I0813 00:04:31.628980  826514 command_runner.go:124] >       "repoTags": [
	I0813 00:04:31.628989  826514 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns:v1.8.0"
	I0813 00:04:31.628996  826514 command_runner.go:124] >       ],
	I0813 00:04:31.629001  826514 command_runner.go:124] >       "repoDigests": [
	I0813 00:04:31.629012  826514 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61",
	I0813 00:04:31.629027  826514 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e"
	I0813 00:04:31.629037  826514 command_runner.go:124] >       ],
	I0813 00:04:31.629043  826514 command_runner.go:124] >       "size": "42585056",
	I0813 00:04:31.629052  826514 command_runner.go:124] >       "uid": null,
	I0813 00:04:31.629059  826514 command_runner.go:124] >       "username": "",
	I0813 00:04:31.629070  826514 command_runner.go:124] >       "spec": null
	I0813 00:04:31.629077  826514 command_runner.go:124] >     },
	I0813 00:04:31.629082  826514 command_runner.go:124] >     {
	I0813 00:04:31.629091  826514 command_runner.go:124] >       "id": "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934",
	I0813 00:04:31.629097  826514 command_runner.go:124] >       "repoTags": [
	I0813 00:04:31.629108  826514 command_runner.go:124] >         "k8s.gcr.io/etcd:3.4.13-0"
	I0813 00:04:31.629115  826514 command_runner.go:124] >       ],
	I0813 00:04:31.629121  826514 command_runner.go:124] >       "repoDigests": [
	I0813 00:04:31.629136  826514 command_runner.go:124] >         "k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2",
	I0813 00:04:31.629150  826514 command_runner.go:124] >         "k8s.gcr.io/etcd@sha256:bd4d2c9a19be8a492bc79df53eee199fd04b415e9993eb69f7718052602a147a"
	I0813 00:04:31.629164  826514 command_runner.go:124] >       ],
	I0813 00:04:31.629173  826514 command_runner.go:124] >       "size": "254662613",
	I0813 00:04:31.629177  826514 command_runner.go:124] >       "uid": null,
	I0813 00:04:31.629186  826514 command_runner.go:124] >       "username": "",
	I0813 00:04:31.629194  826514 command_runner.go:124] >       "spec": null
	I0813 00:04:31.629201  826514 command_runner.go:124] >     },
	I0813 00:04:31.629206  826514 command_runner.go:124] >     {
	I0813 00:04:31.629216  826514 command_runner.go:124] >       "id": "3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80",
	I0813 00:04:31.629226  826514 command_runner.go:124] >       "repoTags": [
	I0813 00:04:31.629234  826514 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver:v1.21.3"
	I0813 00:04:31.629242  826514 command_runner.go:124] >       ],
	I0813 00:04:31.629248  826514 command_runner.go:124] >       "repoDigests": [
	I0813 00:04:31.629260  826514 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2",
	I0813 00:04:31.629274  826514 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver@sha256:910cfdf034262c7b68ecb17c0885f39bdaaad07d87c9a5b6320819d8500b7ee5"
	I0813 00:04:31.629283  826514 command_runner.go:124] >       ],
	I0813 00:04:31.629289  826514 command_runner.go:124] >       "size": "126878961",
	I0813 00:04:31.629297  826514 command_runner.go:124] >       "uid": {
	I0813 00:04:31.629303  826514 command_runner.go:124] >         "value": "0"
	I0813 00:04:31.629312  826514 command_runner.go:124] >       },
	I0813 00:04:31.629318  826514 command_runner.go:124] >       "username": "",
	I0813 00:04:31.629327  826514 command_runner.go:124] >       "spec": null
	I0813 00:04:31.629332  826514 command_runner.go:124] >     },
	I0813 00:04:31.629341  826514 command_runner.go:124] >     {
	I0813 00:04:31.629351  826514 command_runner.go:124] >       "id": "bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9",
	I0813 00:04:31.629360  826514 command_runner.go:124] >       "repoTags": [
	I0813 00:04:31.629368  826514 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager:v1.21.3"
	I0813 00:04:31.629377  826514 command_runner.go:124] >       ],
	I0813 00:04:31.629383  826514 command_runner.go:124] >       "repoDigests": [
	I0813 00:04:31.629397  826514 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b",
	I0813 00:04:31.629413  826514 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager@sha256:7fb1f6614597c255b475ed8abf553e0d4e8ea211b06a90bed53eaddcfb9c354f"
	I0813 00:04:31.629421  826514 command_runner.go:124] >       ],
	I0813 00:04:31.629459  826514 command_runner.go:124] >       "size": "121087578",
	I0813 00:04:31.629470  826514 command_runner.go:124] >       "uid": {
	I0813 00:04:31.629476  826514 command_runner.go:124] >         "value": "0"
	I0813 00:04:31.629481  826514 command_runner.go:124] >       },
	I0813 00:04:31.629525  826514 command_runner.go:124] >       "username": "",
	I0813 00:04:31.629534  826514 command_runner.go:124] >       "spec": null
	I0813 00:04:31.629540  826514 command_runner.go:124] >     },
	I0813 00:04:31.629546  826514 command_runner.go:124] >     {
	I0813 00:04:31.629563  826514 command_runner.go:124] >       "id": "adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92",
	I0813 00:04:31.629572  826514 command_runner.go:124] >       "repoTags": [
	I0813 00:04:31.629580  826514 command_runner.go:124] >         "k8s.gcr.io/kube-proxy:v1.21.3"
	I0813 00:04:31.629587  826514 command_runner.go:124] >       ],
	I0813 00:04:31.629594  826514 command_runner.go:124] >       "repoDigests": [
	I0813 00:04:31.629607  826514 command_runner.go:124] >         "k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b",
	I0813 00:04:31.629622  826514 command_runner.go:124] >         "k8s.gcr.io/kube-proxy@sha256:c7778d7b97b2a822c3fe3e921d104ac42afbd38268de8df03557465780886627"
	I0813 00:04:31.629630  826514 command_runner.go:124] >       ],
	I0813 00:04:31.629637  826514 command_runner.go:124] >       "size": "105129702",
	I0813 00:04:31.629646  826514 command_runner.go:124] >       "uid": null,
	I0813 00:04:31.629651  826514 command_runner.go:124] >       "username": "",
	I0813 00:04:31.629658  826514 command_runner.go:124] >       "spec": null
	I0813 00:04:31.629663  826514 command_runner.go:124] >     },
	I0813 00:04:31.629669  826514 command_runner.go:124] >     {
	I0813 00:04:31.629680  826514 command_runner.go:124] >       "id": "6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a",
	I0813 00:04:31.629688  826514 command_runner.go:124] >       "repoTags": [
	I0813 00:04:31.629697  826514 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler:v1.21.3"
	I0813 00:04:31.629705  826514 command_runner.go:124] >       ],
	I0813 00:04:31.629712  826514 command_runner.go:124] >       "repoDigests": [
	I0813 00:04:31.629731  826514 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4",
	I0813 00:04:31.629748  826514 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler@sha256:b61779ea1bd936c137b25b3a7baa5551fbbd84fed8568d15c7c85ab1139521c0"
	I0813 00:04:31.629757  826514 command_runner.go:124] >       ],
	I0813 00:04:31.629764  826514 command_runner.go:124] >       "size": "51893338",
	I0813 00:04:31.629771  826514 command_runner.go:124] >       "uid": {
	I0813 00:04:31.629778  826514 command_runner.go:124] >         "value": "0"
	I0813 00:04:31.629784  826514 command_runner.go:124] >       },
	I0813 00:04:31.629795  826514 command_runner.go:124] >       "username": "",
	I0813 00:04:31.629804  826514 command_runner.go:124] >       "spec": null
	I0813 00:04:31.629809  826514 command_runner.go:124] >     },
	I0813 00:04:31.629815  826514 command_runner.go:124] >     {
	I0813 00:04:31.629825  826514 command_runner.go:124] >       "id": "0f8457a4c2ecaceac160805013dc3c61c63a1ff3dee74a473a36249a748e0253",
	I0813 00:04:31.629834  826514 command_runner.go:124] >       "repoTags": [
	I0813 00:04:31.629841  826514 command_runner.go:124] >         "k8s.gcr.io/pause:3.4.1"
	I0813 00:04:31.629847  826514 command_runner.go:124] >       ],
	I0813 00:04:31.629852  826514 command_runner.go:124] >       "repoDigests": [
	I0813 00:04:31.629864  826514 command_runner.go:124] >         "k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810",
	I0813 00:04:31.629878  826514 command_runner.go:124] >         "k8s.gcr.io/pause@sha256:914e745e524aa94315a25b49a7fafc0aa395e332126930593225d7a513f5a6b2"
	I0813 00:04:31.629886  826514 command_runner.go:124] >       ],
	I0813 00:04:31.629893  826514 command_runner.go:124] >       "size": "689817",
	I0813 00:04:31.629912  826514 command_runner.go:124] >       "uid": null,
	I0813 00:04:31.629921  826514 command_runner.go:124] >       "username": "",
	I0813 00:04:31.629928  826514 command_runner.go:124] >       "spec": null
	I0813 00:04:31.629933  826514 command_runner.go:124] >     }
	I0813 00:04:31.629939  826514 command_runner.go:124] >   ]
	I0813 00:04:31.629945  826514 command_runner.go:124] > }
	I0813 00:04:31.630103  826514 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 00:04:31.630116  826514 cache_images.go:74] Images are preloaded, skipping loading
	I0813 00:04:31.630195  826514 ssh_runner.go:149] Run: crio config
	I0813 00:04:31.716486  826514 command_runner.go:124] > # The CRI-O configuration file specifies all of the available configuration
	I0813 00:04:31.716523  826514 command_runner.go:124] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0813 00:04:31.716534  826514 command_runner.go:124] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0813 00:04:31.716538  826514 command_runner.go:124] > #
	I0813 00:04:31.716549  826514 command_runner.go:124] > # Please refer to crio.conf(5) for details of all configuration options.
	I0813 00:04:31.716559  826514 command_runner.go:124] > # CRI-O supports partial configuration reload during runtime, which can be
	I0813 00:04:31.716569  826514 command_runner.go:124] > # done by sending SIGHUP to the running process. Currently supported options
	I0813 00:04:31.716584  826514 command_runner.go:124] > # are explicitly mentioned with: 'This option supports live configuration
	I0813 00:04:31.716598  826514 command_runner.go:124] > # reload'.
	I0813 00:04:31.716608  826514 command_runner.go:124] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0813 00:04:31.716620  826514 command_runner.go:124] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0813 00:04:31.716634  826514 command_runner.go:124] > # you want to change the system's defaults. If you want to modify storage just
	I0813 00:04:31.716657  826514 command_runner.go:124] > # for CRI-O, you can change the storage configuration options here.
	I0813 00:04:31.716664  826514 command_runner.go:124] > [crio]
	I0813 00:04:31.716675  826514 command_runner.go:124] > # Path to the "root directory". CRI-O stores all of its data, including
	I0813 00:04:31.716684  826514 command_runner.go:124] > # container images, in this directory.
	I0813 00:04:31.716725  826514 command_runner.go:124] > #root = "/var/lib/containers/storage"
	I0813 00:04:31.716757  826514 command_runner.go:124] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0813 00:04:31.716776  826514 command_runner.go:124] > #runroot = "/var/run/containers/storage"
	I0813 00:04:31.716789  826514 command_runner.go:124] > # Storage driver used to manage the storage of images and containers. Please
	I0813 00:04:31.716803  826514 command_runner.go:124] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0813 00:04:31.716813  826514 command_runner.go:124] > #storage_driver = "overlay"
	I0813 00:04:31.716823  826514 command_runner.go:124] > # List to pass options to the storage driver. Please refer to
	I0813 00:04:31.716834  826514 command_runner.go:124] > # containers-storage.conf(5) to see all available storage options.
	I0813 00:04:31.716841  826514 command_runner.go:124] > #storage_option = [
	I0813 00:04:31.716846  826514 command_runner.go:124] > #]
	I0813 00:04:31.716858  826514 command_runner.go:124] > # The default log directory where all logs will go unless directly specified by
	I0813 00:04:31.716871  826514 command_runner.go:124] > # the kubelet. The log directory specified must be an absolute directory.
	I0813 00:04:31.716879  826514 command_runner.go:124] > log_dir = "/var/log/crio/pods"
	I0813 00:04:31.716888  826514 command_runner.go:124] > # Location for CRI-O to lay down the temporary version file.
	I0813 00:04:31.716910  826514 command_runner.go:124] > # It is used to check if crio wipe should wipe containers, which should
	I0813 00:04:31.716920  826514 command_runner.go:124] > # always happen on a node reboot
	I0813 00:04:31.716928  826514 command_runner.go:124] > version_file = "/var/run/crio/version"
	I0813 00:04:31.716940  826514 command_runner.go:124] > # Location for CRI-O to lay down the persistent version file.
	I0813 00:04:31.716950  826514 command_runner.go:124] > # It is used to check if crio wipe should wipe images, which should
	I0813 00:04:31.716958  826514 command_runner.go:124] > # only happen when CRI-O has been upgraded
	I0813 00:04:31.716974  826514 command_runner.go:124] > version_file_persist = "/var/lib/crio/version"
	I0813 00:04:31.716986  826514 command_runner.go:124] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0813 00:04:31.716992  826514 command_runner.go:124] > [crio.api]
	I0813 00:04:31.717001  826514 command_runner.go:124] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0813 00:04:31.717011  826514 command_runner.go:124] > listen = "/var/run/crio/crio.sock"
	I0813 00:04:31.717020  826514 command_runner.go:124] > # IP address on which the stream server will listen.
	I0813 00:04:31.717029  826514 command_runner.go:124] > stream_address = "127.0.0.1"
	I0813 00:04:31.717041  826514 command_runner.go:124] > # The port on which the stream server will listen. If the port is set to "0", then
	I0813 00:04:31.717051  826514 command_runner.go:124] > # CRI-O will allocate a random free port number.
	I0813 00:04:31.717058  826514 command_runner.go:124] > stream_port = "0"
	I0813 00:04:31.717069  826514 command_runner.go:124] > # Enable encrypted TLS transport of the stream server.
	I0813 00:04:31.717075  826514 command_runner.go:124] > stream_enable_tls = false
	I0813 00:04:31.717084  826514 command_runner.go:124] > # Length of time until open streams terminate due to lack of activity
	I0813 00:04:31.717090  826514 command_runner.go:124] > stream_idle_timeout = ""
	I0813 00:04:31.717099  826514 command_runner.go:124] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0813 00:04:31.717110  826514 command_runner.go:124] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0813 00:04:31.717116  826514 command_runner.go:124] > # minutes.
	I0813 00:04:31.717122  826514 command_runner.go:124] > stream_tls_cert = ""
	I0813 00:04:31.717131  826514 command_runner.go:124] > # Path to the key file used to serve the encrypted stream. This file can
	I0813 00:04:31.717142  826514 command_runner.go:124] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0813 00:04:31.717148  826514 command_runner.go:124] > stream_tls_key = ""
	I0813 00:04:31.717163  826514 command_runner.go:124] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0813 00:04:31.717174  826514 command_runner.go:124] > # communication with the encrypted stream. This file can change and CRI-O will
	I0813 00:04:31.717187  826514 command_runner.go:124] > # automatically pick up the changes within 5 minutes.
	I0813 00:04:31.717193  826514 command_runner.go:124] > stream_tls_ca = ""
	I0813 00:04:31.717207  826514 command_runner.go:124] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0813 00:04:31.717217  826514 command_runner.go:124] > grpc_max_send_msg_size = 16777216
	I0813 00:04:31.717236  826514 command_runner.go:124] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0813 00:04:31.717246  826514 command_runner.go:124] > grpc_max_recv_msg_size = 16777216
	I0813 00:04:31.717257  826514 command_runner.go:124] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0813 00:04:31.717268  826514 command_runner.go:124] > # and options for how to set up and manage the OCI runtime.
	I0813 00:04:31.717274  826514 command_runner.go:124] > [crio.runtime]
	I0813 00:04:31.717285  826514 command_runner.go:124] > # A list of ulimits to be set in containers by default, specified as
	I0813 00:04:31.717295  826514 command_runner.go:124] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0813 00:04:31.717302  826514 command_runner.go:124] > # "nofile=1024:2048"
	I0813 00:04:31.717312  826514 command_runner.go:124] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0813 00:04:31.717322  826514 command_runner.go:124] > #default_ulimits = [
	I0813 00:04:31.717329  826514 command_runner.go:124] > #]
	I0813 00:04:31.717340  826514 command_runner.go:124] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0813 00:04:31.717349  826514 command_runner.go:124] > no_pivot = false
	I0813 00:04:31.717358  826514 command_runner.go:124] > # decryption_keys_path is the path where the keys required for
	I0813 00:04:31.717405  826514 command_runner.go:124] > # image decryption are stored. This option supports live configuration reload.
	I0813 00:04:31.717416  826514 command_runner.go:124] > decryption_keys_path = "/etc/crio/keys/"
	I0813 00:04:31.717425  826514 command_runner.go:124] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0813 00:04:31.717434  826514 command_runner.go:124] > # Will be searched for using $PATH if empty.
	I0813 00:04:31.717444  826514 command_runner.go:124] > conmon = "/usr/libexec/crio/conmon"
	I0813 00:04:31.717452  826514 command_runner.go:124] > # Cgroup setting for conmon
	I0813 00:04:31.717461  826514 command_runner.go:124] > conmon_cgroup = "system.slice"
	I0813 00:04:31.717472  826514 command_runner.go:124] > # Environment variable list for the conmon process, used for passing necessary
	I0813 00:04:31.717483  826514 command_runner.go:124] > # environment variables to conmon or the runtime.
	I0813 00:04:31.717489  826514 command_runner.go:124] > conmon_env = [
	I0813 00:04:31.717499  826514 command_runner.go:124] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0813 00:04:31.717507  826514 command_runner.go:124] > ]
	I0813 00:04:31.717516  826514 command_runner.go:124] > # Additional environment variables to set for all the
	I0813 00:04:31.717531  826514 command_runner.go:124] > # containers. These are overridden if set in the
	I0813 00:04:31.717543  826514 command_runner.go:124] > # container image spec or in the container runtime configuration.
	I0813 00:04:31.717549  826514 command_runner.go:124] > default_env = [
	I0813 00:04:31.717554  826514 command_runner.go:124] > ]
	I0813 00:04:31.717564  826514 command_runner.go:124] > # If true, SELinux will be used for pod separation on the host.
	I0813 00:04:31.717573  826514 command_runner.go:124] > selinux = false
	I0813 00:04:31.717590  826514 command_runner.go:124] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0813 00:04:31.717604  826514 command_runner.go:124] > # for the runtime. If not specified, then the internal default seccomp profile
	I0813 00:04:31.717614  826514 command_runner.go:124] > # will be used. This option supports live configuration reload.
	I0813 00:04:31.717621  826514 command_runner.go:124] > seccomp_profile = ""
	I0813 00:04:31.717631  826514 command_runner.go:124] > # Changes the meaning of an empty seccomp profile. By default
	I0813 00:04:31.717643  826514 command_runner.go:124] > # (and according to CRI spec), an empty profile means unconfined.
	I0813 00:04:31.717653  826514 command_runner.go:124] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0813 00:04:31.717663  826514 command_runner.go:124] > # which might increase security.
	I0813 00:04:31.717671  826514 command_runner.go:124] > seccomp_use_default_when_empty = false
	I0813 00:04:31.717684  826514 command_runner.go:124] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0813 00:04:31.717695  826514 command_runner.go:124] > # profile name is "crio-default". This profile only takes effect if the user
	I0813 00:04:31.717707  826514 command_runner.go:124] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0813 00:04:31.717721  826514 command_runner.go:124] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0813 00:04:31.717733  826514 command_runner.go:124] > # This option supports live configuration reload.
	I0813 00:04:31.717740  826514 command_runner.go:124] > apparmor_profile = "crio-default"
	I0813 00:04:31.717754  826514 command_runner.go:124] > # Used to change irqbalance service config file path which is used for configuring
	I0813 00:04:31.717761  826514 command_runner.go:124] > # irqbalance daemon.
	I0813 00:04:31.717773  826514 command_runner.go:124] > irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0813 00:04:31.717782  826514 command_runner.go:124] > # Cgroup management implementation used for the runtime.
	I0813 00:04:31.717790  826514 command_runner.go:124] > cgroup_manager = "systemd"
	I0813 00:04:31.717800  826514 command_runner.go:124] > # Specify whether the image pull must be performed in a separate cgroup.
	I0813 00:04:31.717810  826514 command_runner.go:124] > separate_pull_cgroup = ""
	I0813 00:04:31.717821  826514 command_runner.go:124] > # List of default capabilities for containers. If it is empty or commented out,
	I0813 00:04:31.717834  826514 command_runner.go:124] > # only the capabilities defined in the containers json file by the user/kube
	I0813 00:04:31.717842  826514 command_runner.go:124] > # will be added.
	I0813 00:04:31.717849  826514 command_runner.go:124] > default_capabilities = [
	I0813 00:04:31.717855  826514 command_runner.go:124] > 	"CHOWN",
	I0813 00:04:31.717861  826514 command_runner.go:124] > 	"DAC_OVERRIDE",
	I0813 00:04:31.717867  826514 command_runner.go:124] > 	"FSETID",
	I0813 00:04:31.717873  826514 command_runner.go:124] > 	"FOWNER",
	I0813 00:04:31.717879  826514 command_runner.go:124] > 	"SETGID",
	I0813 00:04:31.717885  826514 command_runner.go:124] > 	"SETUID",
	I0813 00:04:31.717891  826514 command_runner.go:124] > 	"SETPCAP",
	I0813 00:04:31.717897  826514 command_runner.go:124] > 	"NET_BIND_SERVICE",
	I0813 00:04:31.717905  826514 command_runner.go:124] > 	"KILL",
	I0813 00:04:31.717913  826514 command_runner.go:124] > ]
	I0813 00:04:31.717926  826514 command_runner.go:124] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0813 00:04:31.717939  826514 command_runner.go:124] > # defined in the container json file by the user/kube will be added.
	I0813 00:04:31.717945  826514 command_runner.go:124] > default_sysctls = [
	I0813 00:04:31.717957  826514 command_runner.go:124] > ]
	I0813 00:04:31.717968  826514 command_runner.go:124] > # List of additional devices. specified as
	I0813 00:04:31.717982  826514 command_runner.go:124] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0813 00:04:31.717993  826514 command_runner.go:124] > # If it is empty or commented out, only the devices
	I0813 00:04:31.718004  826514 command_runner.go:124] > # defined in the container json file by the user/kube will be added.
	I0813 00:04:31.718012  826514 command_runner.go:124] > additional_devices = [
	I0813 00:04:31.718017  826514 command_runner.go:124] > ]
	I0813 00:04:31.718028  826514 command_runner.go:124] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0813 00:04:31.718038  826514 command_runner.go:124] > # directories does not exist, then CRI-O will automatically skip them.
	I0813 00:04:31.718046  826514 command_runner.go:124] > hooks_dir = [
	I0813 00:04:31.718053  826514 command_runner.go:124] > 	"/usr/share/containers/oci/hooks.d",
	I0813 00:04:31.718060  826514 command_runner.go:124] > ]
	I0813 00:04:31.718070  826514 command_runner.go:124] > # Path to the file specifying the defaults mounts for each container. The
	I0813 00:04:31.718084  826514 command_runner.go:124] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0813 00:04:31.718095  826514 command_runner.go:124] > # its default mounts from the following two files:
	I0813 00:04:31.718100  826514 command_runner.go:124] > #
	I0813 00:04:31.718110  826514 command_runner.go:124] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0813 00:04:31.718123  826514 command_runner.go:124] > #      override file, where users can either add in their own default mounts, or
	I0813 00:04:31.718132  826514 command_runner.go:124] > #      override the default mounts shipped with the package.
	I0813 00:04:31.718140  826514 command_runner.go:124] > #
	I0813 00:04:31.718152  826514 command_runner.go:124] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0813 00:04:31.718166  826514 command_runner.go:124] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0813 00:04:31.718179  826514 command_runner.go:124] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0813 00:04:31.718190  826514 command_runner.go:124] > #      only add mounts it finds in this file.
	I0813 00:04:31.718195  826514 command_runner.go:124] > #
	I0813 00:04:31.718202  826514 command_runner.go:124] > #default_mounts_file = ""
	I0813 00:04:31.718210  826514 command_runner.go:124] > # Maximum number of processes allowed in a container.
	I0813 00:04:31.718216  826514 command_runner.go:124] > pids_limit = 1024
	I0813 00:04:31.718230  826514 command_runner.go:124] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0813 00:04:31.718244  826514 command_runner.go:124] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0813 00:04:31.718255  826514 command_runner.go:124] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0813 00:04:31.718286  826514 command_runner.go:124] > # limit is never exceeded.
	I0813 00:04:31.718295  826514 command_runner.go:124] > log_size_max = -1
	I0813 00:04:31.718356  826514 command_runner.go:124] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0813 00:04:31.718366  826514 command_runner.go:124] > log_to_journald = false
	I0813 00:04:31.718375  826514 command_runner.go:124] > # Path to directory in which container exit files are written to by conmon.
	I0813 00:04:31.718386  826514 command_runner.go:124] > container_exits_dir = "/var/run/crio/exits"
	I0813 00:04:31.718397  826514 command_runner.go:124] > # Path to directory for container attach sockets.
	I0813 00:04:31.718406  826514 command_runner.go:124] > container_attach_socket_dir = "/var/run/crio"
	I0813 00:04:31.718424  826514 command_runner.go:124] > # The prefix to use for the source of the bind mounts.
	I0813 00:04:31.718433  826514 command_runner.go:124] > bind_mount_prefix = ""
	I0813 00:04:31.718442  826514 command_runner.go:124] > # If set to true, all containers will run in read-only mode.
	I0813 00:04:31.718451  826514 command_runner.go:124] > read_only = false
	I0813 00:04:31.718461  826514 command_runner.go:124] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0813 00:04:31.718474  826514 command_runner.go:124] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0813 00:04:31.718482  826514 command_runner.go:124] > # live configuration reload.
	I0813 00:04:31.718488  826514 command_runner.go:124] > log_level = "info"
	I0813 00:04:31.718497  826514 command_runner.go:124] > # Filter the log messages by the provided regular expression.
	I0813 00:04:31.718506  826514 command_runner.go:124] > # This option supports live configuration reload.
	I0813 00:04:31.718514  826514 command_runner.go:124] > log_filter = ""
	I0813 00:04:31.718524  826514 command_runner.go:124] > # The UID mappings for the user namespace of each container. A range is
	I0813 00:04:31.718537  826514 command_runner.go:124] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0813 00:04:31.718548  826514 command_runner.go:124] > # separated by comma.
	I0813 00:04:31.718555  826514 command_runner.go:124] > uid_mappings = ""
	I0813 00:04:31.718566  826514 command_runner.go:124] > # The GID mappings for the user namespace of each container. A range is
	I0813 00:04:31.718579  826514 command_runner.go:124] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0813 00:04:31.718586  826514 command_runner.go:124] > # separated by comma.
	I0813 00:04:31.718592  826514 command_runner.go:124] > gid_mappings = ""
	I0813 00:04:31.718603  826514 command_runner.go:124] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0813 00:04:31.718613  826514 command_runner.go:124] > # regarding the proper termination of the container. The lowest possible
	I0813 00:04:31.718625  826514 command_runner.go:124] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0813 00:04:31.718632  826514 command_runner.go:124] > ctr_stop_timeout = 30
	I0813 00:04:31.718642  826514 command_runner.go:124] > # manage_ns_lifecycle determines whether we pin and remove namespaces
	I0813 00:04:31.718649  826514 command_runner.go:124] > # and manage their lifecycle.
	I0813 00:04:31.718660  826514 command_runner.go:124] > # This option is being deprecated, and will be unconditionally true in the future.
	I0813 00:04:31.718670  826514 command_runner.go:124] > manage_ns_lifecycle = true
	I0813 00:04:31.718681  826514 command_runner.go:124] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0813 00:04:31.718691  826514 command_runner.go:124] > # when a pod does not have a private PID namespace, and does not use
	I0813 00:04:31.718699  826514 command_runner.go:124] > # a kernel separating runtime (like kata).
	I0813 00:04:31.718708  826514 command_runner.go:124] > # It requires manage_ns_lifecycle to be true.
	I0813 00:04:31.718717  826514 command_runner.go:124] > drop_infra_ctr = false
	I0813 00:04:31.718728  826514 command_runner.go:124] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0813 00:04:31.718740  826514 command_runner.go:124] > # You can use linux CPU list format to specify desired CPUs.
	I0813 00:04:31.718753  826514 command_runner.go:124] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0813 00:04:31.718765  826514 command_runner.go:124] > # infra_ctr_cpuset = ""
	I0813 00:04:31.718775  826514 command_runner.go:124] > # The directory where the state of the managed namespaces gets tracked.
	I0813 00:04:31.718786  826514 command_runner.go:124] > # Only used when manage_ns_lifecycle is true.
	I0813 00:04:31.718793  826514 command_runner.go:124] > namespaces_dir = "/var/run"
	I0813 00:04:31.718808  826514 command_runner.go:124] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0813 00:04:31.718817  826514 command_runner.go:124] > pinns_path = "/usr/bin/pinns"
	I0813 00:04:31.718827  826514 command_runner.go:124] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0813 00:04:31.718835  826514 command_runner.go:124] > # The name is matched against the runtimes map below. If this value is changed,
	I0813 00:04:31.718845  826514 command_runner.go:124] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0813 00:04:31.718852  826514 command_runner.go:124] > default_runtime = "runc"
	I0813 00:04:31.718862  826514 command_runner.go:124] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0813 00:04:31.718873  826514 command_runner.go:124] > # The runtime to use is picked based on the runtime_handler provided by the CRI.
	I0813 00:04:31.718884  826514 command_runner.go:124] > # If no runtime_handler is provided, the runtime will be picked based on the level
	I0813 00:04:31.718894  826514 command_runner.go:124] > # of trust of the workload. Each entry in the table should follow the format:
	I0813 00:04:31.718899  826514 command_runner.go:124] > #
	I0813 00:04:31.718907  826514 command_runner.go:124] > #[crio.runtime.runtimes.runtime-handler]
	I0813 00:04:31.718915  826514 command_runner.go:124] > #  runtime_path = "/path/to/the/executable"
	I0813 00:04:31.718923  826514 command_runner.go:124] > #  runtime_type = "oci"
	I0813 00:04:31.718931  826514 command_runner.go:124] > #  runtime_root = "/path/to/the/root"
	I0813 00:04:31.718939  826514 command_runner.go:124] > #  privileged_without_host_devices = false
	I0813 00:04:31.718946  826514 command_runner.go:124] > #  allowed_annotations = []
	I0813 00:04:31.718952  826514 command_runner.go:124] > # Where:
	I0813 00:04:31.718960  826514 command_runner.go:124] > # - runtime-handler: name used to identify the runtime
	I0813 00:04:31.718971  826514 command_runner.go:124] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0813 00:04:31.718982  826514 command_runner.go:124] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0813 00:04:31.718992  826514 command_runner.go:124] > #   the runtime executable name, and the runtime executable should be placed
	I0813 00:04:31.718998  826514 command_runner.go:124] > #   in $PATH.
	I0813 00:04:31.719009  826514 command_runner.go:124] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0813 00:04:31.719018  826514 command_runner.go:124] > #   omitted, an "oci" runtime is assumed.
	I0813 00:04:31.719030  826514 command_runner.go:124] > # - runtime_root (optional, string): root directory for storage of containers
	I0813 00:04:31.719036  826514 command_runner.go:124] > #   state.
	I0813 00:04:31.719047  826514 command_runner.go:124] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0813 00:04:31.719059  826514 command_runner.go:124] > #   host devices from being passed to privileged containers.
	I0813 00:04:31.719071  826514 command_runner.go:124] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0813 00:04:31.719112  826514 command_runner.go:124] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0813 00:04:31.719123  826514 command_runner.go:124] > #   The currently recognized values are:
	I0813 00:04:31.719134  826514 command_runner.go:124] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0813 00:04:31.719150  826514 command_runner.go:124] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0813 00:04:31.719163  826514 command_runner.go:124] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0813 00:04:31.719173  826514 command_runner.go:124] > [crio.runtime.runtimes.runc]
	I0813 00:04:31.719180  826514 command_runner.go:124] > runtime_path = "/usr/bin/runc"
	I0813 00:04:31.719187  826514 command_runner.go:124] > runtime_type = "oci"
	I0813 00:04:31.719194  826514 command_runner.go:124] > runtime_root = "/run/runc"
	I0813 00:04:31.719210  826514 command_runner.go:124] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0813 00:04:31.719219  826514 command_runner.go:124] > # running containers
	I0813 00:04:31.719231  826514 command_runner.go:124] > #[crio.runtime.runtimes.crun]
	I0813 00:04:31.719246  826514 command_runner.go:124] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0813 00:04:31.719260  826514 command_runner.go:124] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0813 00:04:31.719272  826514 command_runner.go:124] > # surface and mitigating the consequences of containers breakout.
	I0813 00:04:31.719280  826514 command_runner.go:124] > # Kata Containers with the default configured VMM
	I0813 00:04:31.719290  826514 command_runner.go:124] > #[crio.runtime.runtimes.kata-runtime]
	I0813 00:04:31.719299  826514 command_runner.go:124] > # Kata Containers with the QEMU VMM
	I0813 00:04:31.719310  826514 command_runner.go:124] > #[crio.runtime.runtimes.kata-qemu]
	I0813 00:04:31.719320  826514 command_runner.go:124] > # Kata Containers with the Firecracker VMM
	I0813 00:04:31.719327  826514 command_runner.go:124] > #[crio.runtime.runtimes.kata-fc]
	I0813 00:04:31.719339  826514 command_runner.go:124] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0813 00:04:31.719346  826514 command_runner.go:124] > #
	I0813 00:04:31.719356  826514 command_runner.go:124] > # CRI-O reads its configured registries defaults from the system wide
	I0813 00:04:31.719369  826514 command_runner.go:124] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0813 00:04:31.719380  826514 command_runner.go:124] > # you want to modify just CRI-O, you can change the registries configuration in
	I0813 00:04:31.719393  826514 command_runner.go:124] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0813 00:04:31.719405  826514 command_runner.go:124] > # use the system's defaults from /etc/containers/registries.conf.
	I0813 00:04:31.719411  826514 command_runner.go:124] > [crio.image]
	I0813 00:04:31.719422  826514 command_runner.go:124] > # Default transport for pulling images from a remote container storage.
	I0813 00:04:31.719431  826514 command_runner.go:124] > default_transport = "docker://"
	I0813 00:04:31.719442  826514 command_runner.go:124] > # The path to a file containing credentials necessary for pulling images from
	I0813 00:04:31.719454  826514 command_runner.go:124] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0813 00:04:31.719461  826514 command_runner.go:124] > global_auth_file = ""
	I0813 00:04:31.719470  826514 command_runner.go:124] > # The image used to instantiate infra containers.
	I0813 00:04:31.719481  826514 command_runner.go:124] > # This option supports live configuration reload.
	I0813 00:04:31.719489  826514 command_runner.go:124] > pause_image = "k8s.gcr.io/pause:3.4.1"
	I0813 00:04:31.719500  826514 command_runner.go:124] > # The path to a file containing credentials specific for pulling the pause_image from
	I0813 00:04:31.719512  826514 command_runner.go:124] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0813 00:04:31.719522  826514 command_runner.go:124] > # This option supports live configuration reload.
	I0813 00:04:31.719528  826514 command_runner.go:124] > pause_image_auth_file = ""
	I0813 00:04:31.719538  826514 command_runner.go:124] > # The command to run to have a container stay in the paused state.
	I0813 00:04:31.719556  826514 command_runner.go:124] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0813 00:04:31.719569  826514 command_runner.go:124] > # specified in the pause image. When commented out, it will fall back to the
	I0813 00:04:31.719584  826514 command_runner.go:124] > # default: "/pause". This option supports live configuration reload.
	I0813 00:04:31.719594  826514 command_runner.go:124] > pause_command = "/pause"
	I0813 00:04:31.719605  826514 command_runner.go:124] > # Path to the file which decides what sort of policy we use when deciding
	I0813 00:04:31.719618  826514 command_runner.go:124] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0813 00:04:31.719630  826514 command_runner.go:124] > # this option be used, as the default behavior of using the system-wide default
	I0813 00:04:31.719640  826514 command_runner.go:124] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0813 00:04:31.719649  826514 command_runner.go:124] > # refer to containers-policy.json(5) for more details.
	I0813 00:04:31.719658  826514 command_runner.go:124] > signature_policy = ""
	I0813 00:04:31.719669  826514 command_runner.go:124] > # List of registries to skip TLS verification for pulling images. Please
	I0813 00:04:31.719681  826514 command_runner.go:124] > # consider configuring the registries via /etc/containers/registries.conf before
	I0813 00:04:31.719689  826514 command_runner.go:124] > # changing them here.
	I0813 00:04:31.719697  826514 command_runner.go:124] > #insecure_registries = "[]"
	I0813 00:04:31.719723  826514 command_runner.go:124] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0813 00:04:31.719737  826514 command_runner.go:124] > # ignore; the latter will ignore volumes entirely.
	I0813 00:04:31.719743  826514 command_runner.go:124] > image_volumes = "mkdir"
	I0813 00:04:31.719754  826514 command_runner.go:124] > # List of registries to be used when pulling an unqualified image (e.g.,
	I0813 00:04:31.719766  826514 command_runner.go:124] > # "alpine:latest"). By default, registries is set to "docker.io" for
	I0813 00:04:31.719777  826514 command_runner.go:124] > # compatibility reasons. Depending on your workload and use case you may add more
	I0813 00:04:31.719786  826514 command_runner.go:124] > # registries (e.g., "quay.io", "registry.fedoraproject.org",
	I0813 00:04:31.719796  826514 command_runner.go:124] > # "registry.opensuse.org", etc.).
	I0813 00:04:31.719803  826514 command_runner.go:124] > #registries = [
	I0813 00:04:31.719809  826514 command_runner.go:124] > # 	"docker.io",
	I0813 00:04:31.719814  826514 command_runner.go:124] > #]
	I0813 00:04:31.719824  826514 command_runner.go:124] > # Temporary directory to use for storing big files
	I0813 00:04:31.719832  826514 command_runner.go:124] > big_files_temporary_dir = ""
	I0813 00:04:31.719843  826514 command_runner.go:124] > # The crio.network table contains settings pertaining to the management of
	I0813 00:04:31.719850  826514 command_runner.go:124] > # CNI plugins.
	I0813 00:04:31.719856  826514 command_runner.go:124] > [crio.network]
	I0813 00:04:31.719866  826514 command_runner.go:124] > # The default CNI network name to be selected. If not set or "", then
	I0813 00:04:31.719875  826514 command_runner.go:124] > # CRI-O will pick up the first one found in network_dir.
	I0813 00:04:31.719883  826514 command_runner.go:124] > # cni_default_network = "kindnet"
	I0813 00:04:31.719895  826514 command_runner.go:124] > # Path to the directory where CNI configuration files are located.
	I0813 00:04:31.719905  826514 command_runner.go:124] > network_dir = "/etc/cni/net.d/"
	I0813 00:04:31.719914  826514 command_runner.go:124] > # Paths to directories where CNI plugin binaries are located.
	I0813 00:04:31.719927  826514 command_runner.go:124] > plugin_dirs = [
	I0813 00:04:31.719933  826514 command_runner.go:124] > 	"/opt/cni/bin/",
	I0813 00:04:31.719938  826514 command_runner.go:124] > ]
	I0813 00:04:31.719952  826514 command_runner.go:124] > # A necessary configuration for Prometheus based metrics retrieval
	I0813 00:04:31.719961  826514 command_runner.go:124] > [crio.metrics]
	I0813 00:04:31.719969  826514 command_runner.go:124] > # Globally enable or disable metrics support.
	I0813 00:04:31.719978  826514 command_runner.go:124] > enable_metrics = true
	I0813 00:04:31.719987  826514 command_runner.go:124] > # The port on which the metrics server will listen.
	I0813 00:04:31.719997  826514 command_runner.go:124] > metrics_port = 9090
	I0813 00:04:31.720031  826514 command_runner.go:124] > # Local socket path to bind the metrics server to
	I0813 00:04:31.720040  826514 command_runner.go:124] > metrics_socket = ""
	I0813 00:04:31.720089  826514 command_runner.go:124] ! time="2021-08-13T00:04:31Z" level=info msg="Starting CRI-O, version: 1.20.2, git: d5a999ad0a35d895ded554e1e18c142075501a98(clean)"
	I0813 00:04:31.720112  826514 command_runner.go:124] ! time="2021-08-13T00:04:31Z" level=warning msg="The 'registries' option in crio.conf(5) (referenced in \"/etc/crio/crio.conf\") has been deprecated and will be removed with CRI-O 1.21."
	I0813 00:04:31.720129  826514 command_runner.go:124] ! time="2021-08-13T00:04:31Z" level=warning msg="Please refer to containers-registries.conf(5) for configuring unqualified-search registries."
	I0813 00:04:31.720150  826514 command_runner.go:124] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
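The TOML dump above is what `crio config` prints after merging built-in defaults with /etc/crio/crio.conf. A few scalar settings consumed later in the bring-up (for example cgroup_manager and pause_image) can be pulled out without a full TOML parser; the helper below is an illustrative sketch under that assumption, not minikube's implementation:

    // crioconf.go - sketch: extract simple `key = "value"` settings from
    // `crio config` output. Key names match the dump above.
    package main

    import (
    	"bufio"
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // crioSetting scans TOML-ish `key = value` lines for the given key,
    // skipping commented-out defaults such as #storage_driver.
    func crioSetting(conf, key string) (string, bool) {
    	sc := bufio.NewScanner(strings.NewReader(conf))
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		if strings.HasPrefix(line, "#") {
    			continue
    		}
    		if k, v, ok := strings.Cut(line, "="); ok && strings.TrimSpace(k) == key {
    			return strings.Trim(strings.TrimSpace(v), `"`), true
    		}
    	}
    	return "", false
    }

    func main() {
    	out, err := exec.Command("crio", "config").Output()
    	if err != nil {
    		fmt.Println("crio config:", err)
    		return
    	}
    	for _, key := range []string{"cgroup_manager", "pause_image", "network_dir"} {
    		if v, ok := crioSetting(string(out), key); ok {
    			fmt.Printf("%s = %q\n", key, v)
    		}
    	}
    }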
	I0813 00:04:31.720223  826514 cni.go:93] Creating CNI manager for ""
	I0813 00:04:31.720243  826514 cni.go:154] 1 nodes found, recommending kindnet
	I0813 00:04:31.720298  826514 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 00:04:31.720320  826514 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.22 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20210813000359-820289 NodeName:multinode-20210813000359-820289 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.22"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.39.22 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 00:04:31.720474  826514 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.22
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "multinode-20210813000359-820289"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.22
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.22"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0813 00:04:31.720582  826514 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --hostname-override=multinode-20210813000359-820289 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.22 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:multinode-20210813000359-820289 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
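
The kubelet invocation above comes from the base unit at /lib/systemd/system/kubelet.service plus the 10-kubeadm.conf drop-in carrying the ExecStart override. A minimal sketch for inspecting what systemd actually resolves on the node, using stock systemctl subcommands (nothing minikube-specific):

	# Show the base unit merged with every drop-in that overrides it
	systemctl cat kubelet
	# After any manual edit to a unit or drop-in, re-read and restart
	sudo systemctl daemon-reload && sudo systemctl restart kubelet
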
	I0813 00:04:31.720645  826514 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 00:04:31.727820  826514 command_runner.go:124] > kubeadm
	I0813 00:04:31.727842  826514 command_runner.go:124] > kubectl
	I0813 00:04:31.727848  826514 command_runner.go:124] > kubelet
	I0813 00:04:31.728062  826514 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 00:04:31.728127  826514 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 00:04:31.734759  826514 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (511 bytes)
	I0813 00:04:31.746715  826514 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 00:04:31.758298  826514 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
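
With kubeadm.yaml.new staged on the node, a hedged sketch of sanity-checking that exact file before init, assuming the kubeadm binary minikube installs under /var/lib/minikube/binaries; --dry-run reports what init would do without changing node state:

	sudo /var/lib/minikube/binaries/v1.21.3/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
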
	I0813 00:04:31.769853  826514 ssh_runner.go:149] Run: grep 192.168.39.22	control-plane.minikube.internal$ /etc/hosts
	I0813 00:04:31.773705  826514 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.22	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 00:04:31.783812  826514 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289 for IP: 192.168.39.22
	I0813 00:04:31.783864  826514 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key
	I0813 00:04:31.783891  826514 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key
	I0813 00:04:31.783950  826514 certs.go:294] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/client.key
	I0813 00:04:31.783972  826514 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/client.crt with IP's: []
	I0813 00:04:32.037921  826514 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/client.crt ...
	I0813 00:04:32.037950  826514 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/client.crt: {Name:mkf5df9641ea11c906574d810c1c29529a170608 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:04:32.038175  826514 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/client.key ...
	I0813 00:04:32.038188  826514 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/client.key: {Name:mkcd99f24d88fe2629fca2746c101b315deedb23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:04:32.038275  826514 certs.go:294] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/apiserver.key.a67f3e8b
	I0813 00:04:32.038289  826514 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/apiserver.crt.a67f3e8b with IP's: [192.168.39.22 10.96.0.1 127.0.0.1 10.0.0.1]
	I0813 00:04:32.287488  826514 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/apiserver.crt.a67f3e8b ...
	I0813 00:04:32.287529  826514 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/apiserver.crt.a67f3e8b: {Name:mkc518b64a186f076d19e8c89346facb1c87f59b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:04:32.287777  826514 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/apiserver.key.a67f3e8b ...
	I0813 00:04:32.287800  826514 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/apiserver.key.a67f3e8b: {Name:mk74b9e9b7324dd81ddf9c84974f48d24be1bf6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:04:32.287932  826514 certs.go:305] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/apiserver.crt.a67f3e8b -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/apiserver.crt
	I0813 00:04:32.288013  826514 certs.go:309] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/apiserver.key.a67f3e8b -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/apiserver.key
	I0813 00:04:32.288082  826514 certs.go:294] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/proxy-client.key
	I0813 00:04:32.288160  826514 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/proxy-client.crt with IP's: []
	I0813 00:04:32.394515  826514 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/proxy-client.crt ...
	I0813 00:04:32.394547  826514 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/proxy-client.crt: {Name:mkc269edc225dfbdbe858effb5699acd067027fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:04:32.394728  826514 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/proxy-client.key ...
	I0813 00:04:32.394741  826514 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/proxy-client.key: {Name:mkcaf59dee6bab4e8620b2e8ba22d6f73f0031eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:04:32.394817  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0813 00:04:32.394836  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0813 00:04:32.394845  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0813 00:04:32.394856  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0813 00:04:32.394865  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0813 00:04:32.394883  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0813 00:04:32.394895  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0813 00:04:32.394906  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0813 00:04:32.394960  826514 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/820289.pem (1338 bytes)
	W0813 00:04:32.395003  826514 certs.go:369] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/820289_empty.pem, impossibly tiny 0 bytes
	I0813 00:04:32.395019  826514 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 00:04:32.395044  826514 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem (1078 bytes)
	I0813 00:04:32.395075  826514 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem (1123 bytes)
	I0813 00:04:32.395099  826514 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem (1679 bytes)
	I0813 00:04:32.395140  826514 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/8202892.pem (1708 bytes)
	I0813 00:04:32.395168  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/8202892.pem -> /usr/share/ca-certificates/8202892.pem
	I0813 00:04:32.395182  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0813 00:04:32.395191  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/820289.pem -> /usr/share/ca-certificates/820289.pem
	I0813 00:04:32.396132  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 00:04:32.413144  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0813 00:04:32.429646  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 00:04:32.446026  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0813 00:04:32.463119  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 00:04:32.479069  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0813 00:04:32.494793  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 00:04:32.510640  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 00:04:32.526595  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/8202892.pem --> /usr/share/ca-certificates/8202892.pem (1708 bytes)
	I0813 00:04:32.543031  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 00:04:32.558714  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/820289.pem --> /usr/share/ca-certificates/820289.pem (1338 bytes)
	I0813 00:04:32.574494  826514 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
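
The apiserver certificate generated above is signed for IPs [192.168.39.22 10.96.0.1 127.0.0.1 10.0.0.1]. A quick sketch to confirm the SANs once the cert lands on the node (path per the scp above):

	openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'
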
	I0813 00:04:32.586113  826514 ssh_runner.go:149] Run: openssl version
	I0813 00:04:32.592091  826514 command_runner.go:124] > OpenSSL 1.1.1k  25 Mar 2021
	I0813 00:04:32.592152  826514 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8202892.pem && ln -fs /usr/share/ca-certificates/8202892.pem /etc/ssl/certs/8202892.pem"
	I0813 00:04:32.599913  826514 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8202892.pem
	I0813 00:04:32.604566  826514 command_runner.go:124] > -rw-r--r-- 1 root root 1708 Aug 12 23:59 /usr/share/ca-certificates/8202892.pem
	I0813 00:04:32.604598  826514 certs.go:416] hashing: -rw-r--r-- 1 root root 1708 Aug 12 23:59 /usr/share/ca-certificates/8202892.pem
	I0813 00:04:32.604653  826514 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8202892.pem
	I0813 00:04:32.610971  826514 command_runner.go:124] > 3ec20f2e
	I0813 00:04:32.611046  826514 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8202892.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 00:04:32.618917  826514 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 00:04:32.626707  826514 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 00:04:32.630915  826514 command_runner.go:124] > -rw-r--r-- 1 root root 1111 Aug 12 23:51 /usr/share/ca-certificates/minikubeCA.pem
	I0813 00:04:32.631085  826514 certs.go:416] hashing: -rw-r--r-- 1 root root 1111 Aug 12 23:51 /usr/share/ca-certificates/minikubeCA.pem
	I0813 00:04:32.631124  826514 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 00:04:32.636434  826514 command_runner.go:124] > b5213941
	I0813 00:04:32.636674  826514 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 00:04:32.644091  826514 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/820289.pem && ln -fs /usr/share/ca-certificates/820289.pem /etc/ssl/certs/820289.pem"
	I0813 00:04:32.651450  826514 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/820289.pem
	I0813 00:04:32.655613  826514 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Aug 12 23:59 /usr/share/ca-certificates/820289.pem
	I0813 00:04:32.655942  826514 certs.go:416] hashing: -rw-r--r-- 1 root root 1338 Aug 12 23:59 /usr/share/ca-certificates/820289.pem
	I0813 00:04:32.655980  826514 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/820289.pem
	I0813 00:04:32.661613  826514 command_runner.go:124] > 51391683
	I0813 00:04:32.662071  826514 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/820289.pem /etc/ssl/certs/51391683.0"
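
The hash/symlink sequence above is OpenSSL's subject-hash convention: each CA in /etc/ssl/certs is reachable as <subject-hash>.0, so verification can look a CA up by hash instead of scanning every file. A sketch of the same operation done per cert, or for a whole directory at once (openssl rehash is available in the OpenSSL 1.1.1k reported above):

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # -> b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	# or rebuild every hash link in the directory in one pass
	sudo openssl rehash /etc/ssl/certs
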
	I0813 00:04:32.669575  826514 kubeadm.go:390] StartCluster: {Name:multinode-20210813000359-820289 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12122/minikube-v1.22.0-1628238775-12122.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-20210813000359-820289 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.22 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0}
	I0813 00:04:32.669649  826514 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0813 00:04:32.669681  826514 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 00:04:32.701020  826514 cri.go:76] found id: ""
	I0813 00:04:32.701074  826514 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 00:04:32.708039  826514 command_runner.go:124] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0813 00:04:32.708064  826514 command_runner.go:124] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0813 00:04:32.708094  826514 command_runner.go:124] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0813 00:04:32.708212  826514 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 00:04:32.714812  826514 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 00:04:32.721263  826514 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0813 00:04:32.721280  826514 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0813 00:04:32.721288  826514 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0813 00:04:32.721493  826514 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 00:04:32.721607  826514 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 00:04:32.721661  826514 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
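
The --ignore-preflight-errors list above suppresses checks expected to fire on a fresh minikube VM (pre-created directories, static pod manifests, swap, memory). As a sketch, the preflight phase can also be run on its own with the same staged binary and config, to see which checks would otherwise fail:

	sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH \
	  kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml
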
	I0813 00:04:32.865000  826514 command_runner.go:124] > [init] Using Kubernetes version: v1.21.3
	I0813 00:04:32.865108  826514 command_runner.go:124] > [preflight] Running pre-flight checks
	I0813 00:04:33.166852  826514 command_runner.go:124] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0813 00:04:33.166965  826514 command_runner.go:124] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0813 00:04:33.167104  826514 command_runner.go:124] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0813 00:04:33.372502  826514 out.go:204]   - Generating certificates and keys ...
	I0813 00:04:33.370376  826514 command_runner.go:124] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0813 00:04:33.372598  826514 command_runner.go:124] > [certs] Using existing ca certificate authority
	I0813 00:04:33.372695  826514 command_runner.go:124] > [certs] Using existing apiserver certificate and key on disk
	I0813 00:04:33.550935  826514 command_runner.go:124] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0813 00:04:33.821015  826514 command_runner.go:124] > [certs] Generating "front-proxy-ca" certificate and key
	I0813 00:04:34.075284  826514 command_runner.go:124] > [certs] Generating "front-proxy-client" certificate and key
	I0813 00:04:34.267752  826514 command_runner.go:124] > [certs] Generating "etcd/ca" certificate and key
	I0813 00:04:34.577030  826514 command_runner.go:124] > [certs] Generating "etcd/server" certificate and key
	I0813 00:04:34.925803  826514 command_runner.go:124] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-20210813000359-820289] and IPs [192.168.39.22 127.0.0.1 ::1]
	I0813 00:04:34.925891  826514 command_runner.go:124] > [certs] Generating "etcd/peer" certificate and key
	I0813 00:04:34.926093  826514 command_runner.go:124] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-20210813000359-820289] and IPs [192.168.39.22 127.0.0.1 ::1]
	I0813 00:04:34.926174  826514 command_runner.go:124] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0813 00:04:35.101291  826514 command_runner.go:124] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0813 00:04:35.556390  826514 command_runner.go:124] > [certs] Generating "sa" key and public key
	I0813 00:04:35.556493  826514 command_runner.go:124] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0813 00:04:35.706103  826514 command_runner.go:124] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0813 00:04:35.807380  826514 command_runner.go:124] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0813 00:04:35.916494  826514 command_runner.go:124] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0813 00:04:36.231264  826514 command_runner.go:124] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0813 00:04:36.259008  826514 command_runner.go:124] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0813 00:04:36.259149  826514 command_runner.go:124] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0813 00:04:36.259215  826514 command_runner.go:124] > [kubelet-start] Starting the kubelet
	I0813 00:04:36.453900  826514 out.go:204]   - Booting up control plane ...
	I0813 00:04:36.451905  826514 command_runner.go:124] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0813 00:04:36.454016  826514 command_runner.go:124] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0813 00:04:36.464094  826514 command_runner.go:124] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0813 00:04:36.466485  826514 command_runner.go:124] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0813 00:04:36.466564  826514 command_runner.go:124] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0813 00:04:36.473333  826514 command_runner.go:124] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0813 00:04:52.473160  826514 command_runner.go:124] > [apiclient] All control plane components are healthy after 16.004446 seconds
	I0813 00:04:52.473336  826514 command_runner.go:124] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0813 00:04:52.496367  826514 command_runner.go:124] > [kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
	I0813 00:04:53.035422  826514 command_runner.go:124] > [upload-certs] Skipping phase. Please see --upload-certs
	I0813 00:04:53.037726  826514 command_runner.go:124] > [mark-control-plane] Marking the node multinode-20210813000359-820289 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0813 00:04:53.548428  826514 out.go:204]   - Configuring RBAC rules ...
	I0813 00:04:53.546841  826514 command_runner.go:124] > [bootstrap-token] Using token: 2bpigu.aauxs97v3zmdhtlx
	I0813 00:04:53.548591  826514 command_runner.go:124] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0813 00:04:53.558336  826514 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0813 00:04:53.572463  826514 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0813 00:04:53.583375  826514 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0813 00:04:53.606316  826514 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0813 00:04:53.615312  826514 command_runner.go:124] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0813 00:04:53.631160  826514 command_runner.go:124] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0813 00:04:54.020398  826514 command_runner.go:124] > [addons] Applied essential addon: CoreDNS
	I0813 00:04:54.095132  826514 command_runner.go:124] > [addons] Applied essential addon: kube-proxy
	I0813 00:04:54.099089  826514 command_runner.go:124] > Your Kubernetes control-plane has initialized successfully!
	I0813 00:04:54.099199  826514 command_runner.go:124] > To start using your cluster, you need to run the following as a regular user:
	I0813 00:04:54.099237  826514 command_runner.go:124] >   mkdir -p $HOME/.kube
	I0813 00:04:54.099357  826514 command_runner.go:124] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0813 00:04:54.099459  826514 command_runner.go:124] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0813 00:04:54.099574  826514 command_runner.go:124] > Alternatively, if you are the root user, you can run:
	I0813 00:04:54.099639  826514 command_runner.go:124] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0813 00:04:54.099693  826514 command_runner.go:124] > You should now deploy a pod network to the cluster.
	I0813 00:04:54.099783  826514 command_runner.go:124] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0813 00:04:54.099848  826514 command_runner.go:124] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0813 00:04:54.099928  826514 command_runner.go:124] > You can now join any number of control-plane nodes by copying certificate authorities
	I0813 00:04:54.099994  826514 command_runner.go:124] > and service account keys on each node and then running the following as root:
	I0813 00:04:54.100067  826514 command_runner.go:124] >   kubeadm join control-plane.minikube.internal:8443 --token 2bpigu.aauxs97v3zmdhtlx \
	I0813 00:04:54.100157  826514 command_runner.go:124] > 	--discovery-token-ca-cert-hash sha256:a2926d5accd9a2d1096d4e62979978bfd5a94255856b68d015a34969efd36535 \
	I0813 00:04:54.100183  826514 command_runner.go:124] > 	--control-plane 
	I0813 00:04:54.100294  826514 command_runner.go:124] > Then you can join any number of worker nodes by running the following on each as root:
	I0813 00:04:54.100375  826514 command_runner.go:124] > kubeadm join control-plane.minikube.internal:8443 --token 2bpigu.aauxs97v3zmdhtlx \
	I0813 00:04:54.100518  826514 command_runner.go:124] > 	--discovery-token-ca-cert-hash sha256:a2926d5accd9a2d1096d4e62979978bfd5a94255856b68d015a34969efd36535 
	I0813 00:04:54.101340  826514 command_runner.go:124] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
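
The sha256 value in the join commands above is a hash of the cluster CA's public key. A sketch of recomputing it on the control-plane node, using the certificatesDir from the kubeadm config (/var/lib/minikube/certs here, not the stock /etc/kubernetes/pki):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
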
	I0813 00:04:54.101797  826514 cni.go:93] Creating CNI manager for ""
	I0813 00:04:54.101817  826514 cni.go:154] 1 nodes found, recommending kindnet
	I0813 00:04:54.103654  826514 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0813 00:04:54.103770  826514 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0813 00:04:54.113142  826514 command_runner.go:124] >   File: /opt/cni/bin/portmap
	I0813 00:04:54.113166  826514 command_runner.go:124] >   Size: 2853400   	Blocks: 5576       IO Block: 4096   regular file
	I0813 00:04:54.113176  826514 command_runner.go:124] > Device: 10h/16d	Inode: 22646       Links: 1
	I0813 00:04:54.113186  826514 command_runner.go:124] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0813 00:04:54.113194  826514 command_runner.go:124] > Access: 2021-08-13 00:04:13.266164804 +0000
	I0813 00:04:54.113204  826514 command_runner.go:124] > Modify: 2021-08-06 09:23:24.000000000 +0000
	I0813 00:04:54.113212  826514 command_runner.go:124] > Change: 2021-08-13 00:04:09.548164804 +0000
	I0813 00:04:54.113229  826514 command_runner.go:124] >  Birth: -
	I0813 00:04:54.113283  826514 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0813 00:04:54.113297  826514 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0813 00:04:54.145305  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0813 00:04:54.641847  826514 command_runner.go:124] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0813 00:04:54.641885  826514 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0813 00:04:54.641895  826514 command_runner.go:124] > serviceaccount/kindnet created
	I0813 00:04:54.641902  826514 command_runner.go:124] > daemonset.apps/kindnet created
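
With the kindnet ClusterRole, binding, ServiceAccount, and DaemonSet applied, a hedged check that the daemonset actually schedules; the kube-system namespace and the app=kindnet label are assumptions based on where minikube normally places kindnet:

	kubectl -n kube-system rollout status daemonset/kindnet
	kubectl -n kube-system get pods -l app=kindnet -o wide
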
	I0813 00:04:54.641976  826514 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 00:04:54.642097  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:04:54.642118  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=dc1c3ca26e9449ce488a773126b8450402c94a19 minikube.k8s.io/name=multinode-20210813000359-820289 minikube.k8s.io/updated_at=2021_08_13T00_04_54_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:04:54.669213  826514 command_runner.go:124] > -16
	I0813 00:04:54.669288  826514 ops.go:34] apiserver oom_adj: -16
	I0813 00:04:54.792518  826514 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0813 00:04:54.792973  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:04:54.833553  826514 command_runner.go:124] > node/multinode-20210813000359-820289 labeled
	I0813 00:04:54.912960  826514 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:04:55.414157  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:04:55.516079  826514 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:04:55.913637  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:04:56.014242  826514 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:04:56.413696  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:04:56.514073  826514 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:04:56.913654  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:04:57.021844  826514 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:04:57.414343  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:04:57.520853  826514 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:04:57.913796  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:04:58.012866  826514 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:04:58.414564  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:04:58.515899  826514 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:04:58.913609  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:04:59.024558  826514 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:04:59.413652  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:04:59.714571  826514 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:04:59.914021  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:05:00.018402  826514 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:05:00.413536  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:05:00.524937  826514 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:05:00.913812  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:05:01.025033  826514 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:05:01.414269  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:05:01.523883  826514 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:05:01.914430  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:05:02.043133  826514 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:05:02.413664  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:05:02.521870  826514 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:05:02.914539  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:05:03.022988  826514 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:05:03.414311  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:05:03.513325  826514 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:05:03.914133  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:05:04.023796  826514 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:05:04.413681  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:05:04.508698  826514 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:05:04.913739  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:05:05.074859  826514 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:05:05.413518  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:05:05.539828  826514 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:05:05.914163  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:05:06.044113  826514 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:05:06.414297  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:05:06.651694  826514 command_runner.go:124] > NAME      SECRETS   AGE
	I0813 00:05:06.651751  826514 command_runner.go:124] > default   1         0s
	I0813 00:05:06.654454  826514 kubeadm.go:985] duration metric: took 12.012401848s to wait for elevateKubeSystemPrivileges.
	I0813 00:05:06.654484  826514 kubeadm.go:392] StartCluster complete in 33.984914614s
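
The twelve seconds of 'serviceaccounts "default" not found' retries above are minikube waiting for the token controller to create the default ServiceAccount in a brand-new cluster. The same wait as a plain shell loop, a sketch using the paths from the log (the 0.5s interval is an assumption):

	until sudo /var/lib/minikube/binaries/v1.21.3/kubectl \
	    --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
	  sleep 0.5
	done
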
	I0813 00:05:06.654509  826514 settings.go:142] acquiring lock: {Name:mk8798f78c6f0a1d20052a3e99a18e56ee754eb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:05:06.654646  826514 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0813 00:05:06.656045  826514 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig: {Name:mk56dc63045ab5614dcc5cc2eaf1f7d3442c655e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:05:06.656627  826514 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0813 00:05:06.656994  826514 kapi.go:59] client config for multinode-20210813000359-820289: &rest.Config{Host:"https://192.168.39.22:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 00:05:06.657620  826514 cert_rotation.go:137] Starting client certificate rotation controller
	I0813 00:05:06.659270  826514 round_trippers.go:432] GET https://192.168.39.22:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0813 00:05:06.659291  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:06.659298  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:06.659303  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:06.671634  826514 round_trippers.go:457] Response Status: 200 OK in 12 milliseconds
	I0813 00:05:06.671659  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:06.671666  826514 round_trippers.go:463]     Content-Length: 291
	I0813 00:05:06.671671  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:06 GMT
	I0813 00:05:06.671675  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:06.671679  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:06.671684  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:06.671689  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:06.672497  826514 request.go:1123] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"1e1de93b-f119-4736-9f21-896671cf8b78","resourceVersion":"415","creationTimestamp":"2021-08-13T00:04:53Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0813 00:05:06.673363  826514 request.go:1123] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"1e1de93b-f119-4736-9f21-896671cf8b78","resourceVersion":"415","creationTimestamp":"2021-08-13T00:04:53Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0813 00:05:06.673436  826514 round_trippers.go:432] PUT https://192.168.39.22:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0813 00:05:06.673450  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:06.673457  826514 round_trippers.go:442]     Content-Type: application/json
	I0813 00:05:06.673463  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:06.673470  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:06.698684  826514 round_trippers.go:457] Response Status: 200 OK in 25 milliseconds
	I0813 00:05:06.698702  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:06.698708  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:06.698713  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:06.698720  826514 round_trippers.go:463]     Content-Length: 291
	I0813 00:05:06.698727  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:06 GMT
	I0813 00:05:06.698738  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:06.698742  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:06.698763  826514 request.go:1123] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"1e1de93b-f119-4736-9f21-896671cf8b78","resourceVersion":"419","creationTimestamp":"2021-08-13T00:04:53Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0813 00:05:07.199474  826514 round_trippers.go:432] GET https://192.168.39.22:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0813 00:05:07.199504  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:07.199510  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:07.199515  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:07.201870  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:07.201894  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:07.201901  826514 round_trippers.go:463]     Content-Length: 291
	I0813 00:05:07.201906  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:07 GMT
	I0813 00:05:07.201913  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:07.201918  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:07.201922  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:07.201927  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:07.201956  826514 request.go:1123] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"1e1de93b-f119-4736-9f21-896671cf8b78","resourceVersion":"448","creationTimestamp":"2021-08-13T00:04:53Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0813 00:05:07.202081  826514 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "multinode-20210813000359-820289" rescaled to 1
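
The GET/PUT pair above edits the coredns deployment's autoscaling/v1 Scale subresource directly, so the profile runs a single CoreDNS replica. The kubectl equivalent of the same rescale, as a sketch:

	kubectl -n kube-system scale deployment coredns --replicas=1
	kubectl -n kube-system get deploy coredns   # READY should settle at 1/1
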
	I0813 00:05:07.202138  826514 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.39.22 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 00:05:07.204288  826514 out.go:177] * Verifying Kubernetes components...
	I0813 00:05:07.202214  826514 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 00:05:07.202239  826514 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0813 00:05:07.204366  826514 addons.go:59] Setting storage-provisioner=true in profile "multinode-20210813000359-820289"
	I0813 00:05:07.204388  826514 addons.go:59] Setting default-storageclass=true in profile "multinode-20210813000359-820289"
	I0813 00:05:07.204404  826514 addons.go:135] Setting addon storage-provisioner=true in "multinode-20210813000359-820289"
	I0813 00:05:07.204411  826514 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-20210813000359-820289"
	W0813 00:05:07.204415  826514 addons.go:147] addon storage-provisioner should already be in state true
	I0813 00:05:07.204447  826514 host.go:66] Checking if "multinode-20210813000359-820289" exists ...
	I0813 00:05:07.204370  826514 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 00:05:07.204932  826514 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 00:05:07.204943  826514 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 00:05:07.204976  826514 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 00:05:07.205070  826514 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 00:05:07.216734  826514 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:32819
	I0813 00:05:07.217267  826514 main.go:130] libmachine: () Calling .GetVersion
	I0813 00:05:07.217862  826514 main.go:130] libmachine: Using API Version  1
	I0813 00:05:07.217890  826514 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 00:05:07.218272  826514 main.go:130] libmachine: () Calling .GetMachineName
	I0813 00:05:07.218485  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetState
	I0813 00:05:07.220513  826514 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45837
	I0813 00:05:07.220904  826514 main.go:130] libmachine: () Calling .GetVersion
	I0813 00:05:07.221359  826514 main.go:130] libmachine: Using API Version  1
	I0813 00:05:07.221386  826514 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 00:05:07.221754  826514 main.go:130] libmachine: () Calling .GetMachineName
	I0813 00:05:07.222356  826514 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 00:05:07.222406  826514 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 00:05:07.222638  826514 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0813 00:05:07.223015  826514 kapi.go:59] client config for multinode-20210813000359-820289: &rest.Config{Host:"https://192.168.39.22:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 00:05:07.224934  826514 round_trippers.go:432] GET https://192.168.39.22:8443/apis/storage.k8s.io/v1/storageclasses
	I0813 00:05:07.224958  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:07.224967  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:07.224973  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:07.229856  826514 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0813 00:05:07.229876  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:07.229882  826514 round_trippers.go:463]     Content-Length: 109
	I0813 00:05:07.229888  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:07 GMT
	I0813 00:05:07.229893  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:07.229898  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:07.229912  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:07.229917  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:07.229938  826514 request.go:1123] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"448"},"items":[]}
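
The empty StorageClassList above is what flips default-storageclass into "install" mode. For reference, the same GET can be reproduced with a few lines of client-go; this is a minimal sketch, not minikube's code, and the kubeconfig path is a placeholder:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Placeholder kubeconfig path; the log uses the Jenkins workspace copy.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// GET /apis/storage.k8s.io/v1/storageclasses, as in the log above.
    	scs, err := cs.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	// Zero items is what triggers "Setting addon default-storageclass=true".
    	fmt.Printf("found %d storage classes\n", len(scs.Items))
    }
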
	I0813 00:05:07.230757  826514 addons.go:135] Setting addon default-storageclass=true in "multinode-20210813000359-820289"
	W0813 00:05:07.230783  826514 addons.go:147] addon default-storageclass should already be in state true
	I0813 00:05:07.230814  826514 host.go:66] Checking if "multinode-20210813000359-820289" exists ...
	I0813 00:05:07.231218  826514 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 00:05:07.231262  826514 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 00:05:07.234001  826514 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34377
	I0813 00:05:07.234429  826514 main.go:130] libmachine: () Calling .GetVersion
	I0813 00:05:07.234893  826514 main.go:130] libmachine: Using API Version  1
	I0813 00:05:07.234919  826514 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 00:05:07.235330  826514 main.go:130] libmachine: () Calling .GetMachineName
	I0813 00:05:07.235548  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetState
	I0813 00:05:07.238762  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .DriverName
	I0813 00:05:07.240913  826514 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 00:05:07.241036  826514 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 00:05:07.241054  826514 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 00:05:07.241073  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHHostname
	I0813 00:05:07.243327  826514 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43461
	I0813 00:05:07.243776  826514 main.go:130] libmachine: () Calling .GetVersion
	I0813 00:05:07.244257  826514 main.go:130] libmachine: Using API Version  1
	I0813 00:05:07.244286  826514 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 00:05:07.244667  826514 main.go:130] libmachine: () Calling .GetMachineName
	I0813 00:05:07.245206  826514 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 00:05:07.245261  826514 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 00:05:07.246919  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:05:07.247418  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e4:55", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:04:13 +0000 UTC Type:0 Mac:52:54:00:b5:e4:55 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:multinode-20210813000359-820289 Clientid:01:52:54:00:b5:e4:55}
	I0813 00:05:07.247457  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined IP address 192.168.39.22 and MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:05:07.247589  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHPort
	I0813 00:05:07.247794  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHKeyPath
	I0813 00:05:07.247971  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHUsername
	I0813 00:05:07.248118  826514 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289/id_rsa Username:docker}
	I0813 00:05:07.256451  826514 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38471
	I0813 00:05:07.256858  826514 main.go:130] libmachine: () Calling .GetVersion
	I0813 00:05:07.257396  826514 main.go:130] libmachine: Using API Version  1
	I0813 00:05:07.257419  826514 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 00:05:07.257799  826514 main.go:130] libmachine: () Calling .GetMachineName
	I0813 00:05:07.257985  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetState
	I0813 00:05:07.261030  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .DriverName
	I0813 00:05:07.261242  826514 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 00:05:07.261256  826514 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 00:05:07.261270  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHHostname
	I0813 00:05:07.266322  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:05:07.266735  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e4:55", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:04:13 +0000 UTC Type:0 Mac:52:54:00:b5:e4:55 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:multinode-20210813000359-820289 Clientid:01:52:54:00:b5:e4:55}
	I0813 00:05:07.266760  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined IP address 192.168.39.22 and MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:05:07.266929  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHPort
	I0813 00:05:07.267109  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHKeyPath
	I0813 00:05:07.267250  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHUsername
	I0813 00:05:07.267408  826514 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289/id_rsa Username:docker}
	I0813 00:05:07.437584  826514 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 00:05:07.535996  826514 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
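
Both addon manifests are streamed from memory to the node over SSH ("scp memory --> ...") and then applied with the kubectl binary minikube ships under /var/lib/minikube/binaries. A rough, self-contained analogue of that pattern using the stock ssh/scp clients; the key path and user@host pair are placeholders, and minikube's real ssh_runner streams the file rather than copying it from disk:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Placeholders only; the real values come from the machine config
    	// shown elsewhere in this log.
    	const (
    		key  = "/path/to/machines/multinode/id_rsa"
    		addr = "docker@192.168.39.22"
    	)
    	// 1. Put the manifest on the node (nearest portable equivalent of
    	//    minikube's in-memory scp).
    	if out, err := exec.Command("scp", "-i", key,
    		"storage-provisioner.yaml",
    		addr+":/tmp/storage-provisioner.yaml").CombinedOutput(); err != nil {
    		fmt.Println(string(out), err)
    		return
    	}
    	// 2. Apply it with the kubectl binary bundled on the node.
    	apply := "sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
    		"/var/lib/minikube/binaries/v1.21.3/kubectl apply -f /tmp/storage-provisioner.yaml"
    	if out, err := exec.Command("ssh", "-i", key, addr, apply).CombinedOutput(); err != nil {
    		fmt.Println(string(out), err)
    		return
    	}
    	fmt.Println("addon manifest applied")
    }
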
	I0813 00:05:07.538887  826514 command_runner.go:124] > apiVersion: v1
	I0813 00:05:07.538907  826514 command_runner.go:124] > data:
	I0813 00:05:07.538913  826514 command_runner.go:124] >   Corefile: |
	I0813 00:05:07.538919  826514 command_runner.go:124] >     .:53 {
	I0813 00:05:07.538926  826514 command_runner.go:124] >         errors
	I0813 00:05:07.538933  826514 command_runner.go:124] >         health {
	I0813 00:05:07.538950  826514 command_runner.go:124] >            lameduck 5s
	I0813 00:05:07.538955  826514 command_runner.go:124] >         }
	I0813 00:05:07.538964  826514 command_runner.go:124] >         ready
	I0813 00:05:07.538974  826514 command_runner.go:124] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0813 00:05:07.538986  826514 command_runner.go:124] >            pods insecure
	I0813 00:05:07.538995  826514 command_runner.go:124] >            fallthrough in-addr.arpa ip6.arpa
	I0813 00:05:07.539002  826514 command_runner.go:124] >            ttl 30
	I0813 00:05:07.539008  826514 command_runner.go:124] >         }
	I0813 00:05:07.539015  826514 command_runner.go:124] >         prometheus :9153
	I0813 00:05:07.539022  826514 command_runner.go:124] >         forward . /etc/resolv.conf {
	I0813 00:05:07.539031  826514 command_runner.go:124] >            max_concurrent 1000
	I0813 00:05:07.539036  826514 command_runner.go:124] >         }
	I0813 00:05:07.539044  826514 command_runner.go:124] >         cache 30
	I0813 00:05:07.539049  826514 command_runner.go:124] >         loop
	I0813 00:05:07.539056  826514 command_runner.go:124] >         reload
	I0813 00:05:07.539061  826514 command_runner.go:124] >         loadbalance
	I0813 00:05:07.539066  826514 command_runner.go:124] >     }
	I0813 00:05:07.539070  826514 command_runner.go:124] > kind: ConfigMap
	I0813 00:05:07.539080  826514 command_runner.go:124] > metadata:
	I0813 00:05:07.539125  826514 command_runner.go:124] >   creationTimestamp: "2021-08-13T00:04:53Z"
	I0813 00:05:07.539135  826514 command_runner.go:124] >   name: coredns
	I0813 00:05:07.539139  826514 command_runner.go:124] >   namespace: kube-system
	I0813 00:05:07.539143  826514 command_runner.go:124] >   resourceVersion: "272"
	I0813 00:05:07.539149  826514 command_runner.go:124] >   uid: df8ecb27-57a8-4d1a-9ddc-10804cd545c7
	I0813 00:05:07.539292  826514 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
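
The sed pipeline above splices a hosts{} block in front of the forward plugin so that host.minikube.internal resolves to the host-side gateway IP. The same transformation sketched in Go, for illustration only; minikube performs it with sed on the node exactly as logged:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // injectHostRecord mirrors the sed expression in the log line above: it
    // inserts a hosts{} block immediately before the forward plugin line.
    func injectHostRecord(corefile, hostIP string) string {
    	hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
    	var b strings.Builder
    	for _, line := range strings.SplitAfter(corefile, "\n") {
    		// Same address as sed's /^        forward . \/etc\/resolv.conf.*/
    		if strings.HasPrefix(line, "        forward . /etc/resolv.conf") {
    			b.WriteString(hostsBlock)
    		}
    		b.WriteString(line)
    	}
    	return b.String()
    }

    func main() {
    	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n    }\n"
    	fmt.Print(injectHostRecord(corefile, "192.168.39.1"))
    }
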
	I0813 00:05:07.539675  826514 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0813 00:05:07.540039  826514 kapi.go:59] client config for multinode-20210813000359-820289: &rest.Config{Host:"https://192.168.39.22:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 00:05:07.542132  826514 node_ready.go:35] waiting up to 6m0s for node "multinode-20210813000359-820289" to be "Ready" ...
	I0813 00:05:07.542261  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289
	I0813 00:05:07.542280  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:07.542288  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:07.542295  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:07.544528  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:07.544549  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:07.544555  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:07.544561  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:07.544565  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:07.544570  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:07.544575  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:07 GMT
	I0813 00:05:07.544776  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","resourceVersion":"378","creationTimestamp":"2021-08-13T00:04:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813000359-820289","minikube.k8s.io/updated_at":"2021_08_13T00_04_54_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6557 chars]
	I0813 00:05:07.546601  826514 node_ready.go:49] node "multinode-20210813000359-820289" has status "Ready":"True"
	I0813 00:05:07.546631  826514 node_ready.go:38] duration metric: took 4.467398ms waiting for node "multinode-20210813000359-820289" to be "Ready" ...
	I0813 00:05:07.546646  826514 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 00:05:07.546760  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods
	I0813 00:05:07.546778  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:07.546788  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:07.546814  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:07.552626  826514 round_trippers.go:457] Response Status: 200 OK in 5 milliseconds
	I0813 00:05:07.552646  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:07.552653  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:07.552658  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:07.552668  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:07.552673  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:07.552678  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:07 GMT
	I0813 00:05:07.553376  826514 request.go:1123] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"coredns-558bd4d5db-rgwt6","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"5bd6a56d-1eb3-4e0f-b537-15d0b4b176f9","resourceVersion":"443","creationTimestamp":"2021-08-13T00:05:06Z","deletionTimestamp":"2021-08-13T00:05:36Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"65fbbea7-e9b5-486b-a329-69ff09468fc0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65fbbea7-e9b5-486b-a329-69ff09468fc0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:control [truncated 51943 chars]
	I0813 00:05:07.562062  826514 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-rgwt6" in "kube-system" namespace to be "Ready" ...
	I0813 00:05:07.562147  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-rgwt6
	I0813 00:05:07.562161  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:07.562170  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:07.562180  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:07.567427  826514 round_trippers.go:457] Response Status: 200 OK in 5 milliseconds
	I0813 00:05:07.567441  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:07.567445  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:07.567448  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:07.567451  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:07.567456  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:07.567460  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:07 GMT
	I0813 00:05:07.568485  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-rgwt6","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"5bd6a56d-1eb3-4e0f-b537-15d0b4b176f9","resourceVersion":"443","creationTimestamp":"2021-08-13T00:05:06Z","deletionTimestamp":"2021-08-13T00:05:36Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"65fbbea7-e9b5-486b-a329-69ff09468fc0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65fbbea7-e9b5-486b-a329-69ff09468fc0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:sp [truncated 4344 chars]
	I0813 00:05:07.571691  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289
	I0813 00:05:07.571734  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:07.571747  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:07.571753  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:07.574113  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:07.574132  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:07.574139  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:07.574146  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:07.574157  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:07.574162  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:07.574167  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:07 GMT
	I0813 00:05:07.574401  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","resourceVersion":"378","creationTimestamp":"2021-08-13T00:04:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813000359-820289","minikube.k8s.io/updated_at":"2021_08_13T00_04_54_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6557 chars]
	I0813 00:05:08.075586  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-rgwt6
	I0813 00:05:08.075625  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:08.075633  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:08.075638  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:08.082676  826514 round_trippers.go:457] Response Status: 200 OK in 7 milliseconds
	I0813 00:05:08.082696  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:08.082700  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:08.082703  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:08.082712  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:08.082717  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:08.082721  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:08 GMT
	I0813 00:05:08.083793  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-rgwt6","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"5bd6a56d-1eb3-4e0f-b537-15d0b4b176f9","resourceVersion":"443","creationTimestamp":"2021-08-13T00:05:06Z","deletionTimestamp":"2021-08-13T00:05:36Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"65fbbea7-e9b5-486b-a329-69ff09468fc0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65fbbea7-e9b5-486b-a329-69ff09468fc0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:sp [truncated 4344 chars]
	I0813 00:05:08.084098  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289
	I0813 00:05:08.084112  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:08.084117  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:08.084121  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:08.090615  826514 round_trippers.go:457] Response Status: 200 OK in 6 milliseconds
	I0813 00:05:08.090636  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:08.090643  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:08.090648  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:08.090653  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:08 GMT
	I0813 00:05:08.090658  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:08.090662  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:08.091007  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","resourceVersion":"378","creationTimestamp":"2021-08-13T00:04:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813000359-820289","minikube.k8s.io/updated_at":"2021_08_13T00_04_54_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6557 chars]
	I0813 00:05:08.575831  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-rgwt6
	I0813 00:05:08.575865  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:08.575874  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:08.575880  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:08.578832  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:08.578852  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:08.578859  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:08.578863  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:08.578867  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:08.578875  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:08 GMT
	I0813 00:05:08.578879  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:08.578965  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-rgwt6","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"5bd6a56d-1eb3-4e0f-b537-15d0b4b176f9","resourceVersion":"443","creationTimestamp":"2021-08-13T00:05:06Z","deletionTimestamp":"2021-08-13T00:05:36Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"65fbbea7-e9b5-486b-a329-69ff09468fc0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65fbbea7-e9b5-486b-a329-69ff09468fc0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:sp [truncated 4344 chars]
	I0813 00:05:08.579265  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289
	I0813 00:05:08.579282  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:08.579289  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:08.579295  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:08.582567  826514 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 00:05:08.582587  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:08.582592  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:08.582595  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:08.582598  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:08.582602  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:08.582605  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:08 GMT
	I0813 00:05:08.582814  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","resourceVersion":"378","creationTimestamp":"2021-08-13T00:04:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813000359-820289","minikube.k8s.io/updated_at":"2021_08_13T00_04_54_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6557 chars]
	I0813 00:05:09.015561  826514 command_runner.go:124] > serviceaccount/storage-provisioner created
	I0813 00:05:09.036749  826514 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0813 00:05:09.055361  826514 command_runner.go:124] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0813 00:05:09.073972  826514 command_runner.go:124] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0813 00:05:09.075082  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-rgwt6
	I0813 00:05:09.075098  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:09.075103  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:09.075108  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:09.078011  826514 round_trippers.go:457] Response Status: 404 Not Found in 2 milliseconds
	I0813 00:05:09.078032  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:09.078039  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:09 GMT
	I0813 00:05:09.078045  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:09.078050  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:09.078058  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:09.078063  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:09.078068  826514 round_trippers.go:463]     Content-Length: 216
	I0813 00:05:09.078092  826514 request.go:1123] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods \"coredns-558bd4d5db-rgwt6\" not found","reason":"NotFound","details":{"name":"coredns-558bd4d5db-rgwt6","kind":"pods"},"code":404}
	I0813 00:05:09.078826  826514 pod_ready.go:97] error getting pod "coredns-558bd4d5db-rgwt6" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-rgwt6" not found
	I0813 00:05:09.078861  826514 pod_ready.go:81] duration metric: took 1.516767371s waiting for pod "coredns-558bd4d5db-rgwt6" in "kube-system" namespace to be "Ready" ...
	E0813 00:05:09.078878  826514 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-rgwt6" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-rgwt6" not found
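
The 404 above is expected churn rather than a failure: the coredns Deployment had just been scaled from two replicas down to one, so the first pod vanished mid-wait and the waiter logs the miss and moves on to its sibling. A compilable sketch of the usual client-go idiom for that skip (placeholder kubeconfig path; not the pod_ready.go source itself):

    package main

    import (
    	"context"
    	"fmt"

    	apierrors "k8s.io/apimachinery/pkg/api/errors"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	name := "coredns-558bd4d5db-rgwt6"
    	_, err = cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
    	if apierrors.IsNotFound(err) {
    		// The pod was deleted while we waited; skip it and continue
    		// with the remaining pods, as the log above does.
    		fmt.Printf("pod %q not found, skipping\n", name)
    		return
    	}
    	if err != nil {
    		panic(err) // any other error is a real failure
    	}
    	fmt.Println("pod still exists; keep waiting for Ready")
    }
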
	I0813 00:05:09.078887  826514 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-sstrb" in "kube-system" namespace to be "Ready" ...
	I0813 00:05:09.078951  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-sstrb
	I0813 00:05:09.078962  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:09.078968  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:09.078974  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:09.085109  826514 command_runner.go:124] > endpoints/k8s.io-minikube-hostpath created
	I0813 00:05:09.087697  826514 round_trippers.go:457] Response Status: 200 OK in 8 milliseconds
	I0813 00:05:09.087725  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:09.087731  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:09.087736  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:09.087743  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:09.087752  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:09.087764  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:09 GMT
	I0813 00:05:09.087942  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-sstrb","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"16f6c77d-26a2-47e7-9c19-74736961cc13","resourceVersion":"455","creationTimestamp":"2021-08-13T00:05:06Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"65fbbea7-e9b5-486b-a329-69ff09468fc0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65fbbea7-e9b5-486b-a329-69ff09468fc0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5629 chars]
	I0813 00:05:09.088353  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289
	I0813 00:05:09.088379  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:09.088386  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:09.088392  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:09.091901  826514 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 00:05:09.091918  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:09.091924  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:09.091929  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:09 GMT
	I0813 00:05:09.091933  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:09.091938  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:09.091941  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:09.092323  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","resourceVersion":"378","creationTimestamp":"2021-08-13T00:04:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813000359-820289","minikube.k8s.io/updated_at":"2021_08_13T00_04_54_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6557 chars]
	I0813 00:05:09.120148  826514 command_runner.go:124] > pod/storage-provisioner created
	I0813 00:05:09.126027  826514 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.688404152s)
	I0813 00:05:09.126089  826514 main.go:130] libmachine: Making call to close driver server
	I0813 00:05:09.126106  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .Close
	I0813 00:05:09.126379  826514 main.go:130] libmachine: Successfully made call to close driver server
	I0813 00:05:09.126398  826514 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 00:05:09.126414  826514 main.go:130] libmachine: Making call to close driver server
	I0813 00:05:09.126429  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .Close
	I0813 00:05:09.126428  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | Closing plugin on server side
	I0813 00:05:09.126659  826514 main.go:130] libmachine: Successfully made call to close driver server
	I0813 00:05:09.126678  826514 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 00:05:09.174388  826514 command_runner.go:124] > storageclass.storage.k8s.io/standard created
	I0813 00:05:09.178650  826514 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.642611617s)
	I0813 00:05:09.178693  826514 main.go:130] libmachine: Making call to close driver server
	I0813 00:05:09.178705  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .Close
	I0813 00:05:09.178957  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | Closing plugin on server side
	I0813 00:05:09.178984  826514 main.go:130] libmachine: Successfully made call to close driver server
	I0813 00:05:09.179017  826514 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 00:05:09.179025  826514 main.go:130] libmachine: Making call to close driver server
	I0813 00:05:09.179034  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .Close
	I0813 00:05:09.179300  826514 main.go:130] libmachine: Successfully made call to close driver server
	I0813 00:05:09.179324  826514 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 00:05:09.179324  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | Closing plugin on server side
	I0813 00:05:09.179339  826514 main.go:130] libmachine: Making call to close driver server
	I0813 00:05:09.179355  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .Close
	I0813 00:05:09.179599  826514 main.go:130] libmachine: Successfully made call to close driver server
	I0813 00:05:09.179613  826514 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 00:05:09.181826  826514 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0813 00:05:09.181846  826514 addons.go:344] enableAddons completed in 1.979615938s
	I0813 00:05:09.212788  826514 command_runner.go:124] > configmap/coredns replaced
	I0813 00:05:09.212832  826514 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.673513134s)
	I0813 00:05:09.212853  826514 start.go:736] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS
	I0813 00:05:09.593598  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-sstrb
	I0813 00:05:09.593632  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:09.593641  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:09.593656  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:09.597341  826514 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 00:05:09.597364  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:09.597371  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:09.597375  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:09.597380  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:09.597384  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:09.597388  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:09 GMT
	I0813 00:05:09.597885  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-sstrb","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"16f6c77d-26a2-47e7-9c19-74736961cc13","resourceVersion":"455","creationTimestamp":"2021-08-13T00:05:06Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"65fbbea7-e9b5-486b-a329-69ff09468fc0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65fbbea7-e9b5-486b-a329-69ff09468fc0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5629 chars]
	I0813 00:05:09.598307  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289
	I0813 00:05:09.598363  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:09.598370  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:09.598375  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:09.601146  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:09.601171  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:09.601178  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:09.601183  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:09.601188  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:09.601192  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:09.601197  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:09 GMT
	I0813 00:05:09.602228  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","resourceVersion":"378","creationTimestamp":"2021-08-13T00:04:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813000359-820289","minikube.k8s.io/updated_at":"2021_08_13T00_04_54_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6557 chars]
	I0813 00:05:10.093298  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-sstrb
	I0813 00:05:10.093332  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:10.093340  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:10.093347  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:10.096086  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:10.096107  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:10.096114  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:10.096118  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:10.096123  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:10.096127  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:10.096131  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:10 GMT
	I0813 00:05:10.096404  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-sstrb","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"16f6c77d-26a2-47e7-9c19-74736961cc13","resourceVersion":"455","creationTimestamp":"2021-08-13T00:05:06Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"65fbbea7-e9b5-486b-a329-69ff09468fc0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65fbbea7-e9b5-486b-a329-69ff09468fc0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5629 chars]
	I0813 00:05:10.096747  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289
	I0813 00:05:10.096760  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:10.096765  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:10.096769  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:10.098955  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:10.098993  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:10.098999  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:10.099003  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:10.099007  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:10.099012  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:10.099016  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:10 GMT
	I0813 00:05:10.099217  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","resourceVersion":"378","creationTimestamp":"2021-08-13T00:04:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813000359-820289","minikube.k8s.io/updated_at":"2021_08_13T00_04_54_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6557 chars]
	I0813 00:05:10.592897  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-sstrb
	I0813 00:05:10.592928  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:10.592936  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:10.592942  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:10.596527  826514 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 00:05:10.596545  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:10.596550  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:10.596553  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:10.596556  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:10.596559  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:10 GMT
	I0813 00:05:10.596567  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:10.596758  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-sstrb","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"16f6c77d-26a2-47e7-9c19-74736961cc13","resourceVersion":"480","creationTimestamp":"2021-08-13T00:05:06Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"65fbbea7-e9b5-486b-a329-69ff09468fc0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65fbbea7-e9b5-486b-a329-69ff09468fc0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5734 chars]
	I0813 00:05:10.597112  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289
	I0813 00:05:10.597128  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:10.597135  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:10.597141  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:10.598973  826514 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:05:10.598982  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:10.598986  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:10.598989  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:10.598992  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:10.598995  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:10.598997  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:10 GMT
	I0813 00:05:10.599352  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","resourceVersion":"378","creationTimestamp":"2021-08-13T00:04:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813000359-820289","minikube.k8s.io/updated_at":"2021_08_13T00_04_54_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6557 chars]
	I0813 00:05:10.599734  826514 pod_ready.go:92] pod "coredns-558bd4d5db-sstrb" in "kube-system" namespace has status "Ready":"True"
	I0813 00:05:10.599762  826514 pod_ready.go:81] duration metric: took 1.520861701s waiting for pod "coredns-558bd4d5db-sstrb" in "kube-system" namespace to be "Ready" ...
	I0813 00:05:10.599776  826514 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-20210813000359-820289" in "kube-system" namespace to be "Ready" ...
	I0813 00:05:10.599844  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210813000359-820289
	I0813 00:05:10.599856  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:10.599863  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:10.599868  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:10.602087  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:10.602101  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:10.602106  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:10.602111  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:10.602115  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:10.602119  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:10.602123  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:10 GMT
	I0813 00:05:10.602319  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210813000359-820289","namespace":"kube-system","uid":"2d8ff24a-3267-4d8b-a528-3da3d3b70e54","resourceVersion":"330","creationTimestamp":"2021-08-13T00:04:59Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.22:2379","kubernetes.io/config.hash":"bc4647bddd439c8d0983a3b358a72513","kubernetes.io/config.mirror":"bc4647bddd439c8d0983a3b358a72513","kubernetes.io/config.seen":"2021-08-13T00:04:59.185501301Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.
kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.ha [truncated 5574 chars]
	I0813 00:05:10.602638  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289
	I0813 00:05:10.602651  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:10.602657  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:10.602663  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:10.605574  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:10.605591  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:10.605597  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:10.605602  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:10.605606  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:10.605610  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:10.605614  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:10 GMT
	I0813 00:05:10.605774  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","resourceVersion":"378","creationTimestamp":"2021-08-13T00:04:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813000359-820289","minikube.k8s.io/updated_at":"2021_08_13T00_04_54_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6557 chars]
	I0813 00:05:10.605980  826514 pod_ready.go:92] pod "etcd-multinode-20210813000359-820289" in "kube-system" namespace has status "Ready":"True"
	I0813 00:05:10.605991  826514 pod_ready.go:81] duration metric: took 6.206986ms waiting for pod "etcd-multinode-20210813000359-820289" in "kube-system" namespace to be "Ready" ...
	I0813 00:05:10.606002  826514 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-20210813000359-820289" in "kube-system" namespace to be "Ready" ...
	I0813 00:05:10.606042  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20210813000359-820289
	I0813 00:05:10.606050  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:10.606055  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:10.606059  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:10.609521  826514 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 00:05:10.609534  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:10.609539  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:10.609546  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:10 GMT
	I0813 00:05:10.609551  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:10.609556  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:10.609561  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:10.610508  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20210813000359-820289","namespace":"kube-system","uid":"b5954b4a-9e51-488b-a0fa-cacb7de86621","resourceVersion":"450","creationTimestamp":"2021-08-13T00:04:59Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.22:8443","kubernetes.io/config.hash":"e0dcc263218298eb0bc9dd91ad6c2c6d","kubernetes.io/config.mirror":"e0dcc263218298eb0bc9dd91ad6c2c6d","kubernetes.io/config.seen":"2021-08-13T00:04:59.185603315Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-addre [truncated 7252 chars]
	I0813 00:05:10.610772  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289
	I0813 00:05:10.610787  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:10.610793  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:10.610798  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:10.613230  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:10.613242  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:10.613246  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:10.613250  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:10.613253  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:10 GMT
	I0813 00:05:10.613255  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:10.613258  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:10.613492  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","resourceVersion":"378","creationTimestamp":"2021-08-13T00:04:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813000359-820289","minikube.k8s.io/updated_at":"2021_08_13T00_04_54_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6557 chars]
	I0813 00:05:10.613687  826514 pod_ready.go:92] pod "kube-apiserver-multinode-20210813000359-820289" in "kube-system" namespace has status "Ready":"True"
	I0813 00:05:10.613699  826514 pod_ready.go:81] duration metric: took 7.690934ms waiting for pod "kube-apiserver-multinode-20210813000359-820289" in "kube-system" namespace to be "Ready" ...
	I0813 00:05:10.613707  826514 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-20210813000359-820289" in "kube-system" namespace to be "Ready" ...
	I0813 00:05:10.613750  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210813000359-820289
	I0813 00:05:10.613758  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:10.613762  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:10.613765  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:10.622909  826514 round_trippers.go:457] Response Status: 200 OK in 9 milliseconds
	I0813 00:05:10.622925  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:10.622931  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:10.622936  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:10.622941  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:10.622946  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:10.622951  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:10 GMT
	I0813 00:05:10.623955  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210813000359-820289","namespace":"kube-system","uid":"f25a529b-df04-44a7-aa11-5f04f8acaaf9","resourceVersion":"452","creationTimestamp":"2021-08-13T00:04:53Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8e7d0bc335d72432dc6bd22d4541dfbd","kubernetes.io/config.mirror":"8e7d0bc335d72432dc6bd22d4541dfbd","kubernetes.io/config.seen":"2021-08-13T00:04:42.246742645Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con
fig.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config [truncated 6813 chars]
	I0813 00:05:10.624225  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289
	I0813 00:05:10.624236  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:10.624241  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:10.624245  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:10.639185  826514 round_trippers.go:457] Response Status: 200 OK in 14 milliseconds
	I0813 00:05:10.639202  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:10.639208  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:10.639212  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:10.639217  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:10 GMT
	I0813 00:05:10.639221  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:10.639223  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:10.639899  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","resourceVersion":"378","creationTimestamp":"2021-08-13T00:04:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813000359-820289","minikube.k8s.io/updated_at":"2021_08_13T00_04_54_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6557 chars]
	I0813 00:05:10.640205  826514 pod_ready.go:92] pod "kube-controller-manager-multinode-20210813000359-820289" in "kube-system" namespace has status "Ready":"True"
	I0813 00:05:10.640220  826514 pod_ready.go:81] duration metric: took 26.505997ms waiting for pod "kube-controller-manager-multinode-20210813000359-820289" in "kube-system" namespace to be "Ready" ...
	I0813 00:05:10.640231  826514 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tvtvh" in "kube-system" namespace to be "Ready" ...
	I0813 00:05:10.640279  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tvtvh
	I0813 00:05:10.640289  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:10.640296  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:10.640302  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:10.642423  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:10.642434  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:10.642440  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:10 GMT
	I0813 00:05:10.642445  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:10.642450  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:10.642454  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:10.642459  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:10.642597  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tvtvh","generateName":"kube-proxy-","namespace":"kube-system","uid":"7b54ef6d-43f6-4dc1-b3be-c5fb1b57a108","resourceVersion":"476","creationTimestamp":"2021-08-13T00:05:06Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e61f546c-a1b1-4412-af08-7c2ebe78d772","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e61f546c-a1b1-4412-af08-7c2ebe78d772\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller
":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5760 chars]
	I0813 00:05:10.642950  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289
	I0813 00:05:10.642966  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:10.642973  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:10.642979  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:10.647579  826514 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0813 00:05:10.647591  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:10.647595  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:10.647598  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:10.647601  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:10.647604  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:10 GMT
	I0813 00:05:10.647607  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:10.648091  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","resourceVersion":"378","creationTimestamp":"2021-08-13T00:04:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813000359-820289","minikube.k8s.io/updated_at":"2021_08_13T00_04_54_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6557 chars]
	I0813 00:05:10.648364  826514 pod_ready.go:92] pod "kube-proxy-tvtvh" in "kube-system" namespace has status "Ready":"True"
	I0813 00:05:10.648382  826514 pod_ready.go:81] duration metric: took 8.142999ms waiting for pod "kube-proxy-tvtvh" in "kube-system" namespace to be "Ready" ...
	I0813 00:05:10.648392  826514 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-20210813000359-820289" in "kube-system" namespace to be "Ready" ...
	I0813 00:05:10.793758  826514 request.go:600] Waited for 145.306191ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210813000359-820289
	I0813 00:05:10.793828  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210813000359-820289
	I0813 00:05:10.793837  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:10.793842  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:10.793852  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:10.796203  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:10.796224  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:10.796231  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:10.796235  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:10 GMT
	I0813 00:05:10.796240  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:10.796244  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:10.796248  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:10.796599  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20210813000359-820289","namespace":"kube-system","uid":"f92e79ae-a806-4356-8c4f-e58f5355dac5","resourceVersion":"328","creationTimestamp":"2021-08-13T00:04:59Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"113dba97bad3e83d4c789adae2059392","kubernetes.io/config.mirror":"113dba97bad3e83d4c789adae2059392","kubernetes.io/config.seen":"2021-08-13T00:04:59.185608489Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:
kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:la [truncated 4543 chars]
	I0813 00:05:10.993333  826514 request.go:600] Waited for 196.357622ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289
	I0813 00:05:10.993408  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289
	I0813 00:05:10.993417  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:10.993423  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:10.993427  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:10.996459  826514 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 00:05:10.996490  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:10.996497  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:10.996502  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:10.996510  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:10.996514  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:10 GMT
	I0813 00:05:10.996519  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:10.996913  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","resourceVersion":"378","creationTimestamp":"2021-08-13T00:04:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813000359-820289","minikube.k8s.io/updated_at":"2021_08_13T00_04_54_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6557 chars]
	I0813 00:05:10.997167  826514 pod_ready.go:92] pod "kube-scheduler-multinode-20210813000359-820289" in "kube-system" namespace has status "Ready":"True"
	I0813 00:05:10.997177  826514 pod_ready.go:81] duration metric: took 348.777284ms waiting for pod "kube-scheduler-multinode-20210813000359-820289" in "kube-system" namespace to be "Ready" ...
	I0813 00:05:10.997185  826514 pod_ready.go:38] duration metric: took 3.450511964s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
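The pod_ready.go lines above poll each control-plane pod until its PodReady condition reports True, then log the per-pod duration metric. A minimal client-go sketch of that polling pattern (the kubeconfig path and the 500ms interval are assumed stand-ins; the 6m0s timeout and the pod/namespace names come from the log above, and this is not minikube's actual implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig location; minikube builds its REST config internally.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll until Ready, matching the "waiting up to 6m0s for pod ..." lines.
	err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
			"etcd-multinode-20210813000359-820289", metav1.GetOptions{})
		if err != nil {
			return false, nil // treat errors as "not ready yet" and keep polling
		}
		return isPodReady(pod), nil
	})
	fmt.Println("ready wait finished, err:", err)
}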
	I0813 00:05:10.997211  826514 api_server.go:50] waiting for apiserver process to appear ...
	I0813 00:05:10.997260  826514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 00:05:11.007956  826514 command_runner.go:124] > 2627
	I0813 00:05:11.008393  826514 api_server.go:70] duration metric: took 3.806217955s to wait for apiserver process to appear ...
	I0813 00:05:11.008410  826514 api_server.go:86] waiting for apiserver healthz status ...
	I0813 00:05:11.008423  826514 api_server.go:239] Checking apiserver healthz at https://192.168.39.22:8443/healthz ...
	I0813 00:05:11.014574  826514 api_server.go:265] https://192.168.39.22:8443/healthz returned 200:
	ok
	I0813 00:05:11.014650  826514 round_trippers.go:432] GET https://192.168.39.22:8443/version?timeout=32s
	I0813 00:05:11.014661  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:11.014668  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:11.014675  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:11.015934  826514 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:05:11.015948  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:11.015952  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:11.015955  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:11.015958  826514 round_trippers.go:463]     Content-Length: 263
	I0813 00:05:11.015961  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:11 GMT
	I0813 00:05:11.015964  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:11.015967  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:11.015984  826514 request.go:1123] Response Body: {
	  "major": "1",
	  "minor": "21",
	  "gitVersion": "v1.21.3",
	  "gitCommit": "ca643a4d1f7bfe34773c74f79527be4afd95bf39",
	  "gitTreeState": "clean",
	  "buildDate": "2021-07-15T20:59:07Z",
	  "goVersion": "go1.16.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0813 00:05:11.016067  826514 api_server.go:139] control plane version: v1.21.3
	I0813 00:05:11.016081  826514 api_server.go:129] duration metric: took 7.664909ms to wait for apiserver health ...
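The healthz probe and the /version round trip above map onto client-go's discovery client: a GET to the bare /healthz path that expects the literal body "ok", then ServerVersion(), whose Major/Minor/GitVersion fields match the JSON body printed above. A sketch under the same assumed kubeconfig path as before:

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// GET /healthz, as in the api_server.go check; a healthy apiserver answers "ok".
	body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)

	// GET /version, as in the round trip above.
	v, err := client.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Printf("control plane version: %s (major %s, minor %s)\n", v.GitVersion, v.Major, v.Minor)
}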
	I0813 00:05:11.016089  826514 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 00:05:11.193415  826514 request.go:600] Waited for 177.232272ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods
	I0813 00:05:11.193474  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods
	I0813 00:05:11.193479  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:11.193484  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:11.193489  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:11.198553  826514 round_trippers.go:457] Response Status: 200 OK in 5 milliseconds
	I0813 00:05:11.198573  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:11.198578  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:11.198583  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:11.198588  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:11 GMT
	I0813 00:05:11.198591  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:11.198595  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:11.199096  826514 request.go:1123] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"493"},"items":[{"metadata":{"name":"coredns-558bd4d5db-sstrb","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"16f6c77d-26a2-47e7-9c19-74736961cc13","resourceVersion":"480","creationTimestamp":"2021-08-13T00:05:06Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"65fbbea7-e9b5-486b-a329-69ff09468fc0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65fbbea7-e9b5-486b-a329-69ff09468fc0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller"
:{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k: [truncated 52891 chars]
	I0813 00:05:11.200295  826514 system_pods.go:59] 8 kube-system pods found
	I0813 00:05:11.200361  826514 system_pods.go:61] "coredns-558bd4d5db-sstrb" [16f6c77d-26a2-47e7-9c19-74736961cc13] Running
	I0813 00:05:11.200373  826514 system_pods.go:61] "etcd-multinode-20210813000359-820289" [2d8ff24a-3267-4d8b-a528-3da3d3b70e54] Running
	I0813 00:05:11.200383  826514 system_pods.go:61] "kindnet-rzxjz" [650bf88e-f784-45f9-8943-257e984acedb] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0813 00:05:11.200388  826514 system_pods.go:61] "kube-apiserver-multinode-20210813000359-820289" [b5954b4a-9e51-488b-a0fa-cacb7de86621] Running
	I0813 00:05:11.200395  826514 system_pods.go:61] "kube-controller-manager-multinode-20210813000359-820289" [f25a529b-df04-44a7-aa11-5f04f8acaaf9] Running
	I0813 00:05:11.200399  826514 system_pods.go:61] "kube-proxy-tvtvh" [7b54ef6d-43f6-4dc1-b3be-c5fb1b57a108] Running
	I0813 00:05:11.200403  826514 system_pods.go:61] "kube-scheduler-multinode-20210813000359-820289" [f92e79ae-a806-4356-8c4f-e58f5355dac5] Running
	I0813 00:05:11.200408  826514 system_pods.go:61] "storage-provisioner" [9999a063-d32c-4253-8af3-7c28fdc3c692] Running
	I0813 00:05:11.200413  826514 system_pods.go:74] duration metric: took 184.320486ms to wait for pod list to return data ...
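The eight system_pods.go lines come from one pod list in kube-system; a short sketch that produces the same name/UID/phase layout (kubeconfig path assumed, as above):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		// Mirrors the `"name" [uid] Phase` layout the system_pods.go lines log.
		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
	}
}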
	I0813 00:05:11.200422  826514 default_sa.go:34] waiting for default service account to be created ...
	I0813 00:05:11.393852  826514 request.go:600] Waited for 193.35207ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/default/serviceaccounts
	I0813 00:05:11.393908  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/default/serviceaccounts
	I0813 00:05:11.393917  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:11.393923  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:11.393927  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:11.397481  826514 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 00:05:11.397500  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:11.397506  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:11.397512  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:11.397516  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:11.397521  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:11.397524  826514 round_trippers.go:463]     Content-Length: 304
	I0813 00:05:11.397527  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:11 GMT
	I0813 00:05:11.397548  826514 request.go:1123] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"493"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"9a2cb0aa-37cd-4fff-a946-5c4f2781df23","resourceVersion":"392","creationTimestamp":"2021-08-13T00:05:06Z"},"secrets":[{"name":"default-token-hxs74"}]}]}
	I0813 00:05:11.398146  826514 default_sa.go:45] found service account: "default"
	I0813 00:05:11.398162  826514 default_sa.go:55] duration metric: took 197.736144ms for default service account to be created ...
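The recurring "Waited for ... due to client-side throttling, not priority and fairness" lines are client-go's default rate limiter (QPS 5, burst 10) pacing this burst of GETs on the client side, before any server-side priority-and-fairness applies. A client that needed to avoid those waits would raise the limits on its rest.Config before building the clientset; the values below are illustrative, not what minikube does:

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	// client-go defaults to QPS=5, Burst=10; once the token bucket empties,
	// request.go logs the "Waited for ... due to client-side throttling" lines
	// seen above before sending the next request.
	cfg.QPS = 50
	cfg.Burst = 100
	client := kubernetes.NewForConfigOrDie(cfg)
	_ = client // issue requests as in the other sketches
}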
	I0813 00:05:11.398170  826514 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 00:05:11.593599  826514 request.go:600] Waited for 195.355058ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods
	I0813 00:05:11.593670  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods
	I0813 00:05:11.593679  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:11.593689  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:11.593695  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:11.598564  826514 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0813 00:05:11.598592  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:11.598599  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:11.598603  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:11.598608  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:11 GMT
	I0813 00:05:11.598612  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:11.598621  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:11.599626  826514 request.go:1123] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"493"},"items":[{"metadata":{"name":"coredns-558bd4d5db-sstrb","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"16f6c77d-26a2-47e7-9c19-74736961cc13","resourceVersion":"480","creationTimestamp":"2021-08-13T00:05:06Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"65fbbea7-e9b5-486b-a329-69ff09468fc0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65fbbea7-e9b5-486b-a329-69ff09468fc0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller"
:{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k: [truncated 52891 chars]
	I0813 00:05:11.600873  826514 system_pods.go:86] 8 kube-system pods found
	I0813 00:05:11.600896  826514 system_pods.go:89] "coredns-558bd4d5db-sstrb" [16f6c77d-26a2-47e7-9c19-74736961cc13] Running
	I0813 00:05:11.600903  826514 system_pods.go:89] "etcd-multinode-20210813000359-820289" [2d8ff24a-3267-4d8b-a528-3da3d3b70e54] Running
	I0813 00:05:11.600909  826514 system_pods.go:89] "kindnet-rzxjz" [650bf88e-f784-45f9-8943-257e984acedb] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0813 00:05:11.600916  826514 system_pods.go:89] "kube-apiserver-multinode-20210813000359-820289" [b5954b4a-9e51-488b-a0fa-cacb7de86621] Running
	I0813 00:05:11.600923  826514 system_pods.go:89] "kube-controller-manager-multinode-20210813000359-820289" [f25a529b-df04-44a7-aa11-5f04f8acaaf9] Running
	I0813 00:05:11.600927  826514 system_pods.go:89] "kube-proxy-tvtvh" [7b54ef6d-43f6-4dc1-b3be-c5fb1b57a108] Running
	I0813 00:05:11.600931  826514 system_pods.go:89] "kube-scheduler-multinode-20210813000359-820289" [f92e79ae-a806-4356-8c4f-e58f5355dac5] Running
	I0813 00:05:11.600937  826514 system_pods.go:89] "storage-provisioner" [9999a063-d32c-4253-8af3-7c28fdc3c692] Running
	I0813 00:05:11.600943  826514 system_pods.go:126] duration metric: took 202.769344ms to wait for k8s-apps to be running ...
	I0813 00:05:11.600950  826514 system_svc.go:44] waiting for kubelet service to be running ...
	I0813 00:05:11.600998  826514 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 00:05:11.617083  826514 system_svc.go:56] duration metric: took 16.125263ms WaitForService to wait for kubelet.
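The kubelet check in system_svc.go is just the exit status of the command shown above, "sudo systemctl is-active --quiet service kubelet", which minikube runs over SSH inside the VM via ssh_runner. A local sketch of the same check, without the SSH transport:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet <unit>` exits 0 iff the unit is active;
	// the arguments match the ssh_runner command in the log, run locally here.
	cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet not active:", err)
		return
	}
	fmt.Println("kubelet service is running")
}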
	I0813 00:05:11.617103  826514 kubeadm.go:547] duration metric: took 4.414932619s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 00:05:11.617134  826514 node_conditions.go:102] verifying NodePressure condition ...
	I0813 00:05:11.793539  826514 request.go:600] Waited for 176.313476ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes
	I0813 00:05:11.793597  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes
	I0813 00:05:11.793605  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:11.793613  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:11.793623  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:11.796474  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:11.796490  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:11.796496  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:11.796501  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:11 GMT
	I0813 00:05:11.796505  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:11.796510  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:11.796514  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:11.796896  826514 request.go:1123] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"493"},"items":[{"metadata":{"name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","resourceVersion":"378","creationTimestamp":"2021-08-13T00:04:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813000359-820289","minikube.k8s.io/updated_at":"2021_08_13T00_04_54_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-mana
ged-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","opera [truncated 6610 chars]
	I0813 00:05:11.797993  826514 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 00:05:11.798022  826514 node_conditions.go:123] node cpu capacity is 2
	I0813 00:05:11.798040  826514 node_conditions.go:105] duration metric: took 180.900531ms to run NodePressure ...
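node_conditions.go reads those two capacity figures (ephemeral storage 17784752Ki, cpu 2) straight off the NodeList response. A sketch that prints the same values (kubeconfig path assumed):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity is a ResourceList of quantities; String() keeps the Ki suffix.
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
}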
	I0813 00:05:11.798052  826514 start.go:231] waiting for startup goroutines ...
	I0813 00:05:11.800681  826514 out.go:177] 
	I0813 00:05:11.800927  826514 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/config.json ...
	I0813 00:05:11.802911  826514 out.go:177] * Starting node multinode-20210813000359-820289-m02 in cluster multinode-20210813000359-820289
	I0813 00:05:11.802938  826514 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 00:05:11.802955  826514 cache.go:56] Caching tarball of preloaded images
	I0813 00:05:11.803103  826514 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0813 00:05:11.803123  826514 cache.go:59] Finished verifying existence of preloaded tar for v1.21.3 on crio
	I0813 00:05:11.803196  826514 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/config.json ...
	I0813 00:05:11.803337  826514 cache.go:205] Successfully downloaded all kic artifacts
	I0813 00:05:11.803364  826514 start.go:313] acquiring machines lock for multinode-20210813000359-820289-m02: {Name:mk2d46e46728943fc604570595bb7616469b4e8e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 00:05:11.803420  826514 start.go:317] acquired machines lock for "multinode-20210813000359-820289-m02" in 42.804µs
	I0813 00:05:11.803441  826514 start.go:89] Provisioning new machine with config: &{Name:multinode-20210813000359-820289 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12122/minikube-v1.22.0-1628238775-12122.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3
ClusterName:multinode-20210813000359-820289 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.22 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0} &{Name:m02 IP: Port:0 KubernetesVersion:v1.21.3 ControlPlane:false
Worker:true}
	I0813 00:05:11.803505  826514 start.go:126] createHost starting for "m02" (driver="kvm2")
	I0813 00:05:11.805398  826514 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0813 00:05:11.805479  826514 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 00:05:11.805513  826514 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 00:05:11.816567  826514 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37579
	I0813 00:05:11.817078  826514 main.go:130] libmachine: () Calling .GetVersion
	I0813 00:05:11.817592  826514 main.go:130] libmachine: Using API Version 1
	I0813 00:05:11.817620  826514 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 00:05:11.818035  826514 main.go:130] libmachine: () Calling .GetMachineName
	I0813 00:05:11.818222  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetMachineName
	I0813 00:05:11.818389  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .DriverName
	I0813 00:05:11.818602  826514 start.go:160] libmachine.API.Create for "multinode-20210813000359-820289" (driver="kvm2")
	I0813 00:05:11.818638  826514 client.go:168] LocalClient.Create starting
	I0813 00:05:11.818676  826514 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem
	I0813 00:05:11.818707  826514 main.go:130] libmachine: Decoding PEM data...
	I0813 00:05:11.818740  826514 main.go:130] libmachine: Parsing certificate...
	I0813 00:05:11.818877  826514 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem
	I0813 00:05:11.818903  826514 main.go:130] libmachine: Decoding PEM data...
	I0813 00:05:11.818921  826514 main.go:130] libmachine: Parsing certificate...
	I0813 00:05:11.819024  826514 main.go:130] libmachine: Running pre-create checks...
	I0813 00:05:11.819041  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .PreCreateCheck
	I0813 00:05:11.819211  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetConfigRaw
	I0813 00:05:11.819705  826514 main.go:130] libmachine: Creating machine...
	I0813 00:05:11.819748  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .Create
	I0813 00:05:11.819880  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Creating KVM machine...
	I0813 00:05:11.822625  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | found existing default KVM network
	I0813 00:05:11.822713  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | found existing private KVM network mk-multinode-20210813000359-820289
	I0813 00:05:11.822830  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Setting up store path in /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289-m02 ...
	I0813 00:05:11.822860  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Building disk image from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/iso/minikube-v1.22.0-1628238775-12122.iso
	I0813 00:05:11.822922  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | I0813 00:05:11.822815  826790 common.go:108] Making disk image using store path: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	I0813 00:05:11.822997  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Downloading /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/iso/minikube-v1.22.0-1628238775-12122.iso...
	I0813 00:05:12.028514  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | I0813 00:05:12.028390  826790 common.go:115] Creating ssh key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289-m02/id_rsa...
	I0813 00:05:12.219895  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | I0813 00:05:12.219785  826790 common.go:121] Creating raw disk image: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289-m02/multinode-20210813000359-820289-m02.rawdisk...
	I0813 00:05:12.219936  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | Writing magic tar header
	I0813 00:05:12.219958  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | Writing SSH key tar header
	I0813 00:05:12.219975  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | I0813 00:05:12.219889  826790 common.go:135] Fixing permissions on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289-m02 ...
	I0813 00:05:12.220000  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289-m02
	I0813 00:05:12.220039  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289-m02 (perms=drwx------)
	I0813 00:05:12.220064  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines
	I0813 00:05:12.220084  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines (perms=drwxr-xr-x)
	I0813 00:05:12.220108  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube (perms=drwxr-xr-x)
	I0813 00:05:12.220126  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b (perms=drwxr-xr-x)
	I0813 00:05:12.220147  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	I0813 00:05:12.220163  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxr-xr-x)
	I0813 00:05:12.220176  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0813 00:05:12.220185  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Creating domain...
	I0813 00:05:12.220211  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b
	I0813 00:05:12.220228  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0813 00:05:12.220241  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | Checking permissions on dir: /home/jenkins
	I0813 00:05:12.220254  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | Checking permissions on dir: /home
	I0813 00:05:12.220267  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | Skipping /home - not owner
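The "Setting executable bit" / "Checking permissions" pairs above walk from the machine directory up toward /home, chmod-ing each directory the current user owns and stopping at the first one it does not (the "Skipping /home - not owner" line). A rough sketch of that walk; fixPerms is a made-up helper, the path is illustrative, and the real code uses 0700 for the machine dir itself and 0755 for its ancestors:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// fixPerms walks from dir up to the filesystem root, ensuring each directory
// carries the wanted permission bits, and stops quietly where chmod is not
// permitted (i.e. we are not the owner), mirroring the log above.
func fixPerms(dir string, mode os.FileMode) error {
	for d := dir; d != "/"; d = filepath.Dir(d) {
		info, err := os.Stat(d)
		if err != nil {
			return err
		}
		if info.Mode().Perm() == mode {
			continue // already as wanted
		}
		if err := os.Chmod(d, mode); err != nil {
			fmt.Println("skipping", d, "-", err) // not owner: stop, as in the log
			return nil
		}
		fmt.Println("set", mode, "on", d)
	}
	return nil
}

func main() {
	_ = fixPerms("/tmp/minikube-example/machines/m02", 0o755) // illustrative path
}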
	I0813 00:05:12.245446  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:ea:65:50 in network default
	I0813 00:05:12.245932  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Ensuring networks are active...
	I0813 00:05:12.245952  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:12.247884  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Ensuring network default is active
	I0813 00:05:12.248149  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Ensuring network mk-multinode-20210813000359-820289 is active
	I0813 00:05:12.248485  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Getting domain xml...
	I0813 00:05:12.250243  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Creating domain...
	I0813 00:05:12.639993  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Waiting to get IP...
	I0813 00:05:12.641047  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:12.641529  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | unable to find current IP address of domain multinode-20210813000359-820289-m02 in network mk-multinode-20210813000359-820289
	I0813 00:05:12.641569  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | I0813 00:05:12.641507  826790 retry.go:31] will retry after 263.082536ms: waiting for machine to come up
	I0813 00:05:12.905638  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:12.906193  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | unable to find current IP address of domain multinode-20210813000359-820289-m02 in network mk-multinode-20210813000359-820289
	I0813 00:05:12.906224  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | I0813 00:05:12.906146  826790 retry.go:31] will retry after 381.329545ms: waiting for machine to come up
	I0813 00:05:13.288610  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:13.289162  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | unable to find current IP address of domain multinode-20210813000359-820289-m02 in network mk-multinode-20210813000359-820289
	I0813 00:05:13.289189  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | I0813 00:05:13.289101  826790 retry.go:31] will retry after 422.765636ms: waiting for machine to come up
	I0813 00:05:13.713583  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:13.714019  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | unable to find current IP address of domain multinode-20210813000359-820289-m02 in network mk-multinode-20210813000359-820289
	I0813 00:05:13.714055  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | I0813 00:05:13.713959  826790 retry.go:31] will retry after 473.074753ms: waiting for machine to come up
	I0813 00:05:14.188437  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:14.188964  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | unable to find current IP address of domain multinode-20210813000359-820289-m02 in network mk-multinode-20210813000359-820289
	I0813 00:05:14.188990  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | I0813 00:05:14.188917  826790 retry.go:31] will retry after 587.352751ms: waiting for machine to come up
	I0813 00:05:14.777734  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:14.778138  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | unable to find current IP address of domain multinode-20210813000359-820289-m02 in network mk-multinode-20210813000359-820289
	I0813 00:05:14.778162  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | I0813 00:05:14.778084  826790 retry.go:31] will retry after 834.206799ms: waiting for machine to come up
	I0813 00:05:15.614030  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:15.614475  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | unable to find current IP address of domain multinode-20210813000359-820289-m02 in network mk-multinode-20210813000359-820289
	I0813 00:05:15.614520  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | I0813 00:05:15.614389  826790 retry.go:31] will retry after 746.553905ms: waiting for machine to come up
	I0813 00:05:16.362340  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:16.362838  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | unable to find current IP address of domain multinode-20210813000359-820289-m02 in network mk-multinode-20210813000359-820289
	I0813 00:05:16.362872  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | I0813 00:05:16.362798  826790 retry.go:31] will retry after 987.362415ms: waiting for machine to come up
	I0813 00:05:17.351370  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:17.351824  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | unable to find current IP address of domain multinode-20210813000359-820289-m02 in network mk-multinode-20210813000359-820289
	I0813 00:05:17.351854  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | I0813 00:05:17.351780  826790 retry.go:31] will retry after 1.189835008s: waiting for machine to come up
	I0813 00:05:18.543064  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:18.543475  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | unable to find current IP address of domain multinode-20210813000359-820289-m02 in network mk-multinode-20210813000359-820289
	I0813 00:05:18.543506  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | I0813 00:05:18.543430  826790 retry.go:31] will retry after 1.677229867s: waiting for machine to come up
	I0813 00:05:20.223263  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:20.223767  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | unable to find current IP address of domain multinode-20210813000359-820289-m02 in network mk-multinode-20210813000359-820289
	I0813 00:05:20.223801  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | I0813 00:05:20.223687  826790 retry.go:31] will retry after 2.346016261s: waiting for machine to come up
	I0813 00:05:22.571841  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:22.572497  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | unable to find current IP address of domain multinode-20210813000359-820289-m02 in network mk-multinode-20210813000359-820289
	I0813 00:05:22.572531  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | I0813 00:05:22.572432  826790 retry.go:31] will retry after 3.36678925s: waiting for machine to come up
	I0813 00:05:25.942836  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:25.943324  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Found IP for machine: 192.168.39.152
	I0813 00:05:25.943354  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has current primary IP address 192.168.39.152 and MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:25.943370  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Reserving static IP address...
	I0813 00:05:25.943744  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | unable to find host DHCP lease matching {name: "multinode-20210813000359-820289-m02", mac: "52:54:00:9e:b0:8d", ip: "192.168.39.152"} in network mk-multinode-20210813000359-820289
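The retry sequence above (retry.go:31) polls libvirt for a DHCP lease with a growing backoff until the domain reports an IP. Outside of minikube, roughly the same wait can be scripted against virsh; this is an illustrative sketch, not minikube's actual code:

	# Poll for the domain's DHCP lease, backing off between attempts (illustrative only).
	domain=multinode-20210813000359-820289-m02
	delay=1
	until sudo virsh domifaddr "$domain" | grep -q ipv4; do
	    echo "no lease yet; retrying in ${delay}s"
	    sleep "$delay"
	    delay=$(( delay < 4 ? delay * 2 : 4 ))   # capped exponential backoff, as in the log
	done
	sudo virsh domifaddr "$domain"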
	I0813 00:05:25.990157  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | Getting to WaitForSSH function...
	I0813 00:05:25.990212  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Reserved static IP address: 192.168.39.152
	I0813 00:05:25.990228  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Waiting for SSH to be available...
	I0813 00:05:25.994861  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:25.995212  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b0:8d", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:05:25 +0000 UTC Type:0 Mac:52:54:00:9e:b0:8d Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9e:b0:8d}
	I0813 00:05:25.995230  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined IP address 192.168.39.152 and MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:25.995410  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | Using SSH client type: external
	I0813 00:05:25.995453  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289-m02/id_rsa (-rw-------)
	I0813 00:05:25.995501  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.152 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 00:05:25.995516  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | About to run SSH command:
	I0813 00:05:25.995530  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | exit 0
	I0813 00:05:26.132228  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | SSH cmd err, output: <nil>: 
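Unpacked, the external SSH argv logged above is equivalent to the following invocation (key path abbreviated); every option disables prompts or host-key persistence so the liveness probe (`exit 0`) cannot hang on interaction:

	# Standalone form of the SSH liveness probe shown in the log.
	ssh -F /dev/null \
	    -o ConnectionAttempts=3 -o ConnectTimeout=10 \
	    -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
	    -o PasswordAuthentication=no -o ServerAliveInterval=60 \
	    -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	    -o IdentitiesOnly=yes -i .../machines/multinode-20210813000359-820289-m02/id_rsa \
	    -p 22 docker@192.168.39.152 'exit 0'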
	I0813 00:05:26.132684  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) KVM machine creation complete!
	I0813 00:05:26.132762  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetConfigRaw
	I0813 00:05:26.133367  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .DriverName
	I0813 00:05:26.133578  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .DriverName
	I0813 00:05:26.133781  826514 main.go:130] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0813 00:05:26.133798  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetState
	I0813 00:05:26.136537  826514 main.go:130] libmachine: Detecting operating system of created instance...
	I0813 00:05:26.136552  826514 main.go:130] libmachine: Waiting for SSH to be available...
	I0813 00:05:26.136558  826514 main.go:130] libmachine: Getting to WaitForSSH function...
	I0813 00:05:26.136567  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHHostname
	I0813 00:05:26.140967  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:26.141279  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b0:8d", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:05:25 +0000 UTC Type:0 Mac:52:54:00:9e:b0:8d Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:multinode-20210813000359-820289-m02 Clientid:01:52:54:00:9e:b0:8d}
	I0813 00:05:26.141309  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined IP address 192.168.39.152 and MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:26.141402  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHPort
	I0813 00:05:26.141580  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHKeyPath
	I0813 00:05:26.141726  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHKeyPath
	I0813 00:05:26.141870  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHUsername
	I0813 00:05:26.142018  826514 main.go:130] libmachine: Using SSH client type: native
	I0813 00:05:26.142177  826514 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0813 00:05:26.142190  826514 main.go:130] libmachine: About to run SSH command:
	exit 0
	I0813 00:05:26.270575  826514 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 00:05:26.270605  826514 main.go:130] libmachine: Detecting the provisioner...
	I0813 00:05:26.270616  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHHostname
	I0813 00:05:26.275878  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:26.276274  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b0:8d", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:05:25 +0000 UTC Type:0 Mac:52:54:00:9e:b0:8d Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:multinode-20210813000359-820289-m02 Clientid:01:52:54:00:9e:b0:8d}
	I0813 00:05:26.276301  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined IP address 192.168.39.152 and MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:26.276458  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHPort
	I0813 00:05:26.276709  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHKeyPath
	I0813 00:05:26.276914  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHKeyPath
	I0813 00:05:26.277031  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHUsername
	I0813 00:05:26.277202  826514 main.go:130] libmachine: Using SSH client type: native
	I0813 00:05:26.277334  826514 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0813 00:05:26.277345  826514 main.go:130] libmachine: About to run SSH command:
	cat /etc/os-release
	I0813 00:05:26.404569  826514 main.go:130] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2020.02.12
	ID=buildroot
	VERSION_ID=2020.02.12
	PRETTY_NAME="Buildroot 2020.02.12"
	
	I0813 00:05:26.404628  826514 main.go:130] libmachine: found compatible host: buildroot
	I0813 00:05:26.404638  826514 main.go:130] libmachine: Provisioning with buildroot...
	I0813 00:05:26.404661  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetMachineName
	I0813 00:05:26.404881  826514 buildroot.go:166] provisioning hostname "multinode-20210813000359-820289-m02"
	I0813 00:05:26.404909  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetMachineName
	I0813 00:05:26.405072  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHHostname
	I0813 00:05:26.409749  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:26.410065  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b0:8d", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:05:25 +0000 UTC Type:0 Mac:52:54:00:9e:b0:8d Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:multinode-20210813000359-820289-m02 Clientid:01:52:54:00:9e:b0:8d}
	I0813 00:05:26.410089  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined IP address 192.168.39.152 and MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:26.410241  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHPort
	I0813 00:05:26.410392  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHKeyPath
	I0813 00:05:26.410567  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHKeyPath
	I0813 00:05:26.410713  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHUsername
	I0813 00:05:26.410897  826514 main.go:130] libmachine: Using SSH client type: native
	I0813 00:05:26.411067  826514 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0813 00:05:26.411085  826514 main.go:130] libmachine: About to run SSH command:
	sudo hostname multinode-20210813000359-820289-m02 && echo "multinode-20210813000359-820289-m02" | sudo tee /etc/hostname
	I0813 00:05:26.548864  826514 main.go:130] libmachine: SSH cmd err, output: <nil>: multinode-20210813000359-820289-m02
	
	I0813 00:05:26.548897  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHHostname
	I0813 00:05:26.553708  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:26.553993  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b0:8d", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:05:25 +0000 UTC Type:0 Mac:52:54:00:9e:b0:8d Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:multinode-20210813000359-820289-m02 Clientid:01:52:54:00:9e:b0:8d}
	I0813 00:05:26.554027  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined IP address 192.168.39.152 and MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:26.554150  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHPort
	I0813 00:05:26.554329  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHKeyPath
	I0813 00:05:26.554483  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHKeyPath
	I0813 00:05:26.554647  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHUsername
	I0813 00:05:26.554817  826514 main.go:130] libmachine: Using SSH client type: native
	I0813 00:05:26.554988  826514 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0813 00:05:26.555019  826514 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20210813000359-820289-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20210813000359-820289-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20210813000359-820289-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 00:05:26.689619  826514 main.go:130] libmachine: SSH cmd err, output: <nil>: 
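The two provisioning commands above pin the hostname both live and persistently: `hostname`/`/etc/hostname` set it for the running system, and the `/etc/hosts` edit keeps local resolvers from stalling on a name DNS does not know. Condensed, the script run over SSH is:

	# Set the transient and persistent hostname, then make it resolvable locally.
	NEW=multinode-20210813000359-820289-m02
	sudo hostname "$NEW" && echo "$NEW" | sudo tee /etc/hostname
	if ! grep -q "\s$NEW$" /etc/hosts; then
	    # reuse the 127.0.1.1 line if present, otherwise append one
	    if grep -q '^127.0.1.1\s' /etc/hosts; then
	        sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 $NEW/" /etc/hosts
	    else
	        echo "127.0.1.1 $NEW" | sudo tee -a /etc/hosts
	    fi
	fi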
	I0813 00:05:26.689648  826514 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube}
	I0813 00:05:26.689666  826514 buildroot.go:174] setting up certificates
	I0813 00:05:26.689674  826514 provision.go:83] configureAuth start
	I0813 00:05:26.689685  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetMachineName
	I0813 00:05:26.689952  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetIP
	I0813 00:05:26.695294  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:26.695641  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b0:8d", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:05:25 +0000 UTC Type:0 Mac:52:54:00:9e:b0:8d Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:multinode-20210813000359-820289-m02 Clientid:01:52:54:00:9e:b0:8d}
	I0813 00:05:26.695674  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined IP address 192.168.39.152 and MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:26.695785  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHHostname
	I0813 00:05:26.700088  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:26.700416  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b0:8d", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:05:25 +0000 UTC Type:0 Mac:52:54:00:9e:b0:8d Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:multinode-20210813000359-820289-m02 Clientid:01:52:54:00:9e:b0:8d}
	I0813 00:05:26.700450  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined IP address 192.168.39.152 and MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:26.700522  826514 provision.go:137] copyHostCerts
	I0813 00:05:26.700558  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem
	I0813 00:05:26.700595  826514 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem, removing ...
	I0813 00:05:26.700618  826514 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem
	I0813 00:05:26.700687  826514 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem (1123 bytes)
	I0813 00:05:26.700764  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem
	I0813 00:05:26.700790  826514 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem, removing ...
	I0813 00:05:26.700799  826514 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem
	I0813 00:05:26.700831  826514 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem (1679 bytes)
	I0813 00:05:26.700879  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem
	I0813 00:05:26.700901  826514 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem, removing ...
	I0813 00:05:26.700910  826514 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem
	I0813 00:05:26.700933  826514 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem (1078 bytes)
	I0813 00:05:26.700984  826514 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem org=jenkins.multinode-20210813000359-820289-m02 san=[192.168.39.152 192.168.39.152 localhost 127.0.0.1 minikube multinode-20210813000359-820289-m02]
	I0813 00:05:26.860935  826514 provision.go:171] copyRemoteCerts
	I0813 00:05:26.860988  826514 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 00:05:26.861018  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHHostname
	I0813 00:05:26.865741  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:26.866063  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b0:8d", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:05:25 +0000 UTC Type:0 Mac:52:54:00:9e:b0:8d Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:multinode-20210813000359-820289-m02 Clientid:01:52:54:00:9e:b0:8d}
	I0813 00:05:26.866097  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined IP address 192.168.39.152 and MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:26.866218  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHPort
	I0813 00:05:26.866376  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHKeyPath
	I0813 00:05:26.866534  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHUsername
	I0813 00:05:26.866680  826514 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289-m02/id_rsa Username:docker}
	I0813 00:05:26.959094  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0813 00:05:26.959166  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 00:05:26.975755  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0813 00:05:26.975809  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I0813 00:05:26.991937  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0813 00:05:26.991981  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
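minikube generates the server certificate in-process (provision.go:111) with the SAN list shown above before copying it into /etc/docker on the node. An openssl equivalent, purely for illustration and not minikube's mechanism, would be:

	# Illustrative openssl equivalent of the in-process server cert generation.
	openssl req -new -newkey rsa:2048 -nodes \
	    -keyout server-key.pem -out server.csr \
	    -subj "/O=jenkins.multinode-20210813000359-820289-m02"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	    -out server.pem -days 365 \
	    -extfile <(printf "subjectAltName=IP:192.168.39.152,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:multinode-20210813000359-820289-m02")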
	I0813 00:05:27.008954  826514 provision.go:86] duration metric: configureAuth took 319.268918ms
	I0813 00:05:27.008982  826514 buildroot.go:189] setting minikube options for container-runtime
	I0813 00:05:27.009222  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHHostname
	I0813 00:05:27.014503  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:27.014798  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b0:8d", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:05:25 +0000 UTC Type:0 Mac:52:54:00:9e:b0:8d Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:multinode-20210813000359-820289-m02 Clientid:01:52:54:00:9e:b0:8d}
	I0813 00:05:27.014832  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined IP address 192.168.39.152 and MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:27.014966  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHPort
	I0813 00:05:27.015162  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHKeyPath
	I0813 00:05:27.015325  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHKeyPath
	I0813 00:05:27.015448  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHUsername
	I0813 00:05:27.015578  826514 main.go:130] libmachine: Using SSH client type: native
	I0813 00:05:27.015767  826514 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0813 00:05:27.015787  826514 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 00:05:27.632016  826514 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 00:05:27.632050  826514 main.go:130] libmachine: Checking connection to Docker...
	I0813 00:05:27.632063  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetURL
	I0813 00:05:27.634807  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | Using libvirt version 3000000
	I0813 00:05:27.639902  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:27.640269  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b0:8d", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:05:25 +0000 UTC Type:0 Mac:52:54:00:9e:b0:8d Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:multinode-20210813000359-820289-m02 Clientid:01:52:54:00:9e:b0:8d}
	I0813 00:05:27.640296  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined IP address 192.168.39.152 and MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:27.640439  826514 main.go:130] libmachine: Docker is up and running!
	I0813 00:05:27.640454  826514 main.go:130] libmachine: Reticulating splines...
	I0813 00:05:27.640462  826514 client.go:171] LocalClient.Create took 15.821812553s
	I0813 00:05:27.640485  826514 start.go:168] duration metric: libmachine.API.Create for "multinode-20210813000359-820289" took 15.821884504s
	I0813 00:05:27.640498  826514 start.go:267] post-start starting for "multinode-20210813000359-820289-m02" (driver="kvm2")
	I0813 00:05:27.640507  826514 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 00:05:27.640534  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .DriverName
	I0813 00:05:27.640772  826514 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 00:05:27.640800  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHHostname
	I0813 00:05:27.644885  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:27.645180  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b0:8d", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:05:25 +0000 UTC Type:0 Mac:52:54:00:9e:b0:8d Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:multinode-20210813000359-820289-m02 Clientid:01:52:54:00:9e:b0:8d}
	I0813 00:05:27.645207  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined IP address 192.168.39.152 and MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:27.645315  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHPort
	I0813 00:05:27.645490  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHKeyPath
	I0813 00:05:27.645668  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHUsername
	I0813 00:05:27.645794  826514 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289-m02/id_rsa Username:docker}
	I0813 00:05:27.738784  826514 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 00:05:27.743460  826514 command_runner.go:124] > NAME=Buildroot
	I0813 00:05:27.743485  826514 command_runner.go:124] > VERSION=2020.02.12
	I0813 00:05:27.743491  826514 command_runner.go:124] > ID=buildroot
	I0813 00:05:27.743502  826514 command_runner.go:124] > VERSION_ID=2020.02.12
	I0813 00:05:27.743509  826514 command_runner.go:124] > PRETTY_NAME="Buildroot 2020.02.12"
	I0813 00:05:27.743548  826514 info.go:137] Remote host: Buildroot 2020.02.12
	I0813 00:05:27.743563  826514 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/addons for local assets ...
	I0813 00:05:27.743630  826514 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files for local assets ...
	I0813 00:05:27.743770  826514 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/8202892.pem -> 8202892.pem in /etc/ssl/certs
	I0813 00:05:27.743783  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/8202892.pem -> /etc/ssl/certs/8202892.pem
	I0813 00:05:27.743910  826514 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 00:05:27.750704  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/8202892.pem --> /etc/ssl/certs/8202892.pem (1708 bytes)
	I0813 00:05:27.767250  826514 start.go:270] post-start completed in 126.737681ms
	I0813 00:05:27.767299  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetConfigRaw
	I0813 00:05:27.767949  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetIP
	I0813 00:05:27.772859  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:27.773128  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b0:8d", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:05:25 +0000 UTC Type:0 Mac:52:54:00:9e:b0:8d Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:multinode-20210813000359-820289-m02 Clientid:01:52:54:00:9e:b0:8d}
	I0813 00:05:27.773161  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined IP address 192.168.39.152 and MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:27.773366  826514 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/config.json ...
	I0813 00:05:27.773544  826514 start.go:129] duration metric: createHost completed in 15.970030387s
	I0813 00:05:27.773566  826514 start.go:80] releasing machines lock for "multinode-20210813000359-820289-m02", held for 15.970128109s
	I0813 00:05:27.773612  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .DriverName
	I0813 00:05:27.773797  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetIP
	I0813 00:05:27.777787  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:27.778095  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b0:8d", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:05:25 +0000 UTC Type:0 Mac:52:54:00:9e:b0:8d Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:multinode-20210813000359-820289-m02 Clientid:01:52:54:00:9e:b0:8d}
	I0813 00:05:27.778126  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined IP address 192.168.39.152 and MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:27.780843  826514 out.go:177] * Found network options:
	I0813 00:05:27.782254  826514 out.go:177]   - NO_PROXY=192.168.39.22
	W0813 00:05:27.782294  826514 proxy.go:118] fail to check proxy env: Error ip not in block
	I0813 00:05:27.782341  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .DriverName
	I0813 00:05:27.782517  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .DriverName
	I0813 00:05:27.782981  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .DriverName
	W0813 00:05:27.783164  826514 proxy.go:118] fail to check proxy env: Error ip not in block
	I0813 00:05:27.783208  826514 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 00:05:27.783282  826514 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 00:05:27.783330  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHHostname
	I0813 00:05:27.783282  826514 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 00:05:27.783388  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHHostname
	I0813 00:05:27.790002  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:27.790035  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:27.790337  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b0:8d", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:05:25 +0000 UTC Type:0 Mac:52:54:00:9e:b0:8d Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:multinode-20210813000359-820289-m02 Clientid:01:52:54:00:9e:b0:8d}
	I0813 00:05:27.790373  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined IP address 192.168.39.152 and MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:27.790440  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b0:8d", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:05:25 +0000 UTC Type:0 Mac:52:54:00:9e:b0:8d Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:multinode-20210813000359-820289-m02 Clientid:01:52:54:00:9e:b0:8d}
	I0813 00:05:27.790469  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined IP address 192.168.39.152 and MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:27.790512  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHPort
	I0813 00:05:27.790640  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHPort
	I0813 00:05:27.790725  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHKeyPath
	I0813 00:05:27.790790  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHKeyPath
	I0813 00:05:27.790853  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHUsername
	I0813 00:05:27.790905  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHUsername
	I0813 00:05:27.790955  826514 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289-m02/id_rsa Username:docker}
	I0813 00:05:27.790991  826514 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289-m02/id_rsa Username:docker}
	I0813 00:05:31.875217  826514 command_runner.go:124] > {
	I0813 00:05:31.875247  826514 command_runner.go:124] >   "images": [
	I0813 00:05:31.875254  826514 command_runner.go:124] >   ]
	I0813 00:05:31.875259  826514 command_runner.go:124] > }
	I0813 00:05:31.876323  826514 command_runner.go:124] > <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
	I0813 00:05:31.876343  826514 command_runner.go:124] > <TITLE>302 Moved</TITLE></HEAD><BODY>
	I0813 00:05:31.876349  826514 command_runner.go:124] > <H1>302 Moved</H1>
	I0813 00:05:31.876354  826514 command_runner.go:124] > The document has moved
	I0813 00:05:31.876364  826514 command_runner.go:124] > <A HREF="https://cloud.google.com/container-registry/">here</A>.
	I0813 00:05:31.876369  826514 command_runner.go:124] > </BODY></HTML>
	I0813 00:05:31.876405  826514 ssh_runner.go:189] Completed: curl -sS -m 2 https://k8s.gcr.io/: (4.093100139s)
	I0813 00:05:31.876506  826514 command_runner.go:124] ! time="2021-08-13T00:05:27Z" level=warning msg="image connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	I0813 00:05:31.876524  826514 command_runner.go:124] ! time="2021-08-13T00:05:29Z" level=error msg="connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	I0813 00:05:31.876544  826514 command_runner.go:124] ! time="2021-08-13T00:05:31Z" level=error msg="connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
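The crictl warnings above are expected on this first call: /etc/crictl.yaml has not been written yet, so crictl probes each deprecated default socket (dockershim, then containerd) with a two-second deadline before falling through to the CRI-O socket, which accounts for the ~4s wall time of both completed commands. Pinning the endpoint, as minikube itself does a few steps later, avoids the probing:

	# Pin crictl to the CRI-O socket so it skips the deprecated default-endpoint probing.
	sudo tee /etc/crictl.yaml >/dev/null <<-'EOF'
		runtime-endpoint: unix:///var/run/crio/crio.sock
		image-endpoint: unix:///var/run/crio/crio.sock
	EOF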
	I0813 00:05:31.876579  826514 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.093195447s)
	I0813 00:05:31.876616  826514 crio.go:420] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.21.3". assuming images are not preloaded.
	I0813 00:05:31.876669  826514 ssh_runner.go:149] Run: which lz4
	I0813 00:05:31.880822  826514 command_runner.go:124] > /bin/lz4
	I0813 00:05:31.881074  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0813 00:05:31.881166  826514 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0813 00:05:31.885591  826514 command_runner.go:124] ! stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0813 00:05:31.885629  826514 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0813 00:05:31.885658  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (576184326 bytes)
	I0813 00:05:34.202842  826514 crio.go:362] Took 2.321704 seconds to copy over tarball
	I0813 00:05:34.202919  826514 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0813 00:05:39.218531  826514 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (5.015578535s)
	I0813 00:05:39.218565  826514 crio.go:369] Took 5.015691 seconds to extract the tarball
	I0813 00:05:39.218576  826514 ssh_runner.go:100] rm: /preloaded.tar.lz4
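The preload path trades hundreds of individual registry pulls for one copy plus a fast decompress: the ~576 MB image tarball is pushed over SSH and unpacked straight into /var with lz4 as tar's decompressor. A standalone sketch of the same step (key path abbreviated):

	# Copy the preloaded image tarball to the node and unpack it into /var (illustrative).
	scp -i .../id_rsa preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 \
	    docker@192.168.39.152:/preloaded.tar.lz4
	ssh docker@192.168.39.152 \
	    'sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm /preloaded.tar.lz4'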
	I0813 00:05:39.260566  826514 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 00:05:39.273464  826514 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 00:05:39.283375  826514 docker.go:153] disabling docker service ...
	I0813 00:05:39.283428  826514 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 00:05:39.294353  826514 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 00:05:39.303091  826514 command_runner.go:124] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0813 00:05:39.303316  826514 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 00:05:39.312741  826514 command_runner.go:124] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0813 00:05:39.439475  826514 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 00:05:39.575584  826514 command_runner.go:124] ! Unit docker.service does not exist, proceeding anyway.
	I0813 00:05:39.575617  826514 command_runner.go:124] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0813 00:05:39.575726  826514 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 00:05:39.585566  826514 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 00:05:39.598735  826514 command_runner.go:124] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0813 00:05:39.598758  826514 command_runner.go:124] > image-endpoint: unix:///var/run/crio/crio.sock
	I0813 00:05:39.598822  826514 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0813 00:05:39.606200  826514 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0813 00:05:39.606221  826514 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
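After the two sed edits above, /etc/crio/crio.conf carries the pause image and the default CNI network that the rest of the bring-up assumes. A quick check of the result, with the expected values from the log:

	# Confirm the two settings the sed edits above wrote into the CRI-O config.
	grep -E 'pause_image|cni_default_network' /etc/crio/crio.conf
	# pause_image = "k8s.gcr.io/pause:3.4.1"
	# cni_default_network = "kindnet"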
	I0813 00:05:39.613697  826514 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 00:05:39.620204  826514 command_runner.go:124] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 00:05:39.620468  826514 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 00:05:39.620513  826514 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 00:05:39.634564  826514 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
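The sysctl failure above is benign: net.bridge.bridge-nf-call-iptables only exists once the br_netfilter module is loaded, which is why the modprobe follows the failed probe. With the module in place, bridged pod traffic becomes visible to iptables, and ip_forward lets the node route it. The full sequence, with an explicit enable that the log leaves implicit (illustrative):

	# Load the bridge-netfilter module, then enable the settings kube-proxy relies on.
	sudo modprobe br_netfilter
	sudo sysctl -w net.bridge.bridge-nf-call-iptables=1
	echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward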
	I0813 00:05:39.641333  826514 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 00:05:39.785855  826514 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 00:05:40.054895  826514 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 00:05:40.054976  826514 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 00:05:40.059499  826514 command_runner.go:124] >   File: /var/run/crio/crio.sock
	I0813 00:05:40.059522  826514 command_runner.go:124] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0813 00:05:40.059531  826514 command_runner.go:124] > Device: 14h/20d	Inode: 29533       Links: 1
	I0813 00:05:40.059538  826514 command_runner.go:124] > Access: (0755/srwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0813 00:05:40.059543  826514 command_runner.go:124] > Access: 2021-08-13 00:05:31.821340619 +0000
	I0813 00:05:40.059551  826514 command_runner.go:124] > Modify: 2021-08-13 00:05:27.527499015 +0000
	I0813 00:05:40.059557  826514 command_runner.go:124] > Change: 2021-08-13 00:05:27.527499015 +0000
	I0813 00:05:40.059560  826514 command_runner.go:124] >  Birth: -
	I0813 00:05:40.060000  826514 start.go:417] Will wait 60s for crictl version
	I0813 00:05:40.060059  826514 ssh_runner.go:149] Run: sudo crictl version
	I0813 00:05:40.091311  826514 command_runner.go:124] > Version:  0.1.0
	I0813 00:05:40.091333  826514 command_runner.go:124] > RuntimeName:  cri-o
	I0813 00:05:40.091344  826514 command_runner.go:124] > RuntimeVersion:  1.20.2
	I0813 00:05:40.091353  826514 command_runner.go:124] > RuntimeApiVersion:  v1alpha1
	I0813 00:05:40.092257  826514 start.go:426] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.2
	RuntimeApiVersion:  v1alpha1
	I0813 00:05:40.092340  826514 ssh_runner.go:149] Run: crio --version
	I0813 00:05:40.338752  826514 command_runner.go:124] > crio version 1.20.2
	I0813 00:05:40.338779  826514 command_runner.go:124] > Version:       1.20.2
	I0813 00:05:40.338786  826514 command_runner.go:124] > GitCommit:     d5a999ad0a35d895ded554e1e18c142075501a98
	I0813 00:05:40.338791  826514 command_runner.go:124] > GitTreeState:  clean
	I0813 00:05:40.338798  826514 command_runner.go:124] > BuildDate:     2021-08-06T09:19:16Z
	I0813 00:05:40.338803  826514 command_runner.go:124] > GoVersion:     go1.13.15
	I0813 00:05:40.338809  826514 command_runner.go:124] > Compiler:      gc
	I0813 00:05:40.338817  826514 command_runner.go:124] > Platform:      linux/amd64
	I0813 00:05:40.340459  826514 command_runner.go:124] ! time="2021-08-13T00:05:40Z" level=info msg="Starting CRI-O, version: 1.20.2, git: d5a999ad0a35d895ded554e1e18c142075501a98(clean)"
	I0813 00:05:40.340545  826514 ssh_runner.go:149] Run: crio --version
	I0813 00:05:40.634279  826514 command_runner.go:124] > crio version 1.20.2
	I0813 00:05:40.634306  826514 command_runner.go:124] > Version:       1.20.2
	I0813 00:05:40.634316  826514 command_runner.go:124] > GitCommit:     d5a999ad0a35d895ded554e1e18c142075501a98
	I0813 00:05:40.634321  826514 command_runner.go:124] > GitTreeState:  clean
	I0813 00:05:40.634335  826514 command_runner.go:124] > BuildDate:     2021-08-06T09:19:16Z
	I0813 00:05:40.634342  826514 command_runner.go:124] > GoVersion:     go1.13.15
	I0813 00:05:40.634348  826514 command_runner.go:124] > Compiler:      gc
	I0813 00:05:40.634355  826514 command_runner.go:124] > Platform:      linux/amd64
	I0813 00:05:40.635480  826514 command_runner.go:124] ! time="2021-08-13T00:05:40Z" level=info msg="Starting CRI-O, version: 1.20.2, git: d5a999ad0a35d895ded554e1e18c142075501a98(clean)"
	I0813 00:05:43.433000  826514 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.2 ...
	I0813 00:05:43.435220  826514 out.go:177]   - env NO_PROXY=192.168.39.22
	I0813 00:05:43.435265  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetIP
	I0813 00:05:43.440935  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:43.441303  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b0:8d", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:05:25 +0000 UTC Type:0 Mac:52:54:00:9e:b0:8d Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:multinode-20210813000359-820289-m02 Clientid:01:52:54:00:9e:b0:8d}
	I0813 00:05:43.441336  826514 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined IP address 192.168.39.152 and MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:05:43.441535  826514 ssh_runner.go:149] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0813 00:05:43.446316  826514 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
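	(The bash one-liner above updates /etc/hosts idempotently: it filters out any stale line ending in a tab plus the hostname, appends the fresh mapping, and swaps the result in via a temp file. A Go sketch of the same technique, with a hypothetical SetHostsEntry helper; the log does the final replace with `sudo cp` rather than the rename shown here:)

    package hostsfile

    import (
    	"os"
    	"strings"
    )

    // SetHostsEntry drops any existing line for name and appends "ip\tname",
    // writing through a temp file so the replacement is all-or-nothing.
    func SetHostsEntry(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+name) { // mirrors grep -v $'\t<name>$'
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+name)
    	tmp := path + ".tmp"
    	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
    		return err
    	}
    	return os.Rename(tmp, path)
    }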
	I0813 00:05:43.457318  826514 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289 for IP: 192.168.39.152
	I0813 00:05:43.457364  826514 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key
	I0813 00:05:43.457381  826514 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key
	I0813 00:05:43.457393  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0813 00:05:43.457408  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0813 00:05:43.457420  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0813 00:05:43.457431  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0813 00:05:43.457487  826514 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/820289.pem (1338 bytes)
	W0813 00:05:43.457527  826514 certs.go:369] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/820289_empty.pem, impossibly tiny 0 bytes
	I0813 00:05:43.457540  826514 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 00:05:43.457566  826514 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem (1078 bytes)
	I0813 00:05:43.457592  826514 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem (1123 bytes)
	I0813 00:05:43.457615  826514 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem (1679 bytes)
	I0813 00:05:43.457664  826514 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/8202892.pem (1708 bytes)
	I0813 00:05:43.457699  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0813 00:05:43.457712  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/820289.pem -> /usr/share/ca-certificates/820289.pem
	I0813 00:05:43.457723  826514 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/8202892.pem -> /usr/share/ca-certificates/8202892.pem
	I0813 00:05:43.458108  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 00:05:43.475819  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0813 00:05:43.492168  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 00:05:43.507680  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 00:05:43.524777  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 00:05:43.541295  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/820289.pem --> /usr/share/ca-certificates/820289.pem (1338 bytes)
	I0813 00:05:43.557434  826514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/8202892.pem --> /usr/share/ca-certificates/8202892.pem (1708 bytes)
	I0813 00:05:43.573549  826514 ssh_runner.go:149] Run: openssl version
	I0813 00:05:43.578939  826514 command_runner.go:124] > OpenSSL 1.1.1k  25 Mar 2021
	I0813 00:05:43.579003  826514 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8202892.pem && ln -fs /usr/share/ca-certificates/8202892.pem /etc/ssl/certs/8202892.pem"
	I0813 00:05:43.586452  826514 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8202892.pem
	I0813 00:05:43.590618  826514 command_runner.go:124] > -rw-r--r-- 1 root root 1708 Aug 12 23:59 /usr/share/ca-certificates/8202892.pem
	I0813 00:05:43.591039  826514 certs.go:416] hashing: -rw-r--r-- 1 root root 1708 Aug 12 23:59 /usr/share/ca-certificates/8202892.pem
	I0813 00:05:43.591079  826514 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8202892.pem
	I0813 00:05:43.596423  826514 command_runner.go:124] > 3ec20f2e
	I0813 00:05:43.596785  826514 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8202892.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 00:05:43.604354  826514 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 00:05:43.611690  826514 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 00:05:43.616528  826514 command_runner.go:124] > -rw-r--r-- 1 root root 1111 Aug 12 23:51 /usr/share/ca-certificates/minikubeCA.pem
	I0813 00:05:43.616845  826514 certs.go:416] hashing: -rw-r--r-- 1 root root 1111 Aug 12 23:51 /usr/share/ca-certificates/minikubeCA.pem
	I0813 00:05:43.616888  826514 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 00:05:43.622478  826514 command_runner.go:124] > b5213941
	I0813 00:05:43.622519  826514 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 00:05:43.630072  826514 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/820289.pem && ln -fs /usr/share/ca-certificates/820289.pem /etc/ssl/certs/820289.pem"
	I0813 00:05:43.638062  826514 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/820289.pem
	I0813 00:05:43.642625  826514 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Aug 12 23:59 /usr/share/ca-certificates/820289.pem
	I0813 00:05:43.642646  826514 certs.go:416] hashing: -rw-r--r-- 1 root root 1338 Aug 12 23:59 /usr/share/ca-certificates/820289.pem
	I0813 00:05:43.642677  826514 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/820289.pem
	I0813 00:05:43.648215  826514 command_runner.go:124] > 51391683
	I0813 00:05:43.648279  826514 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/820289.pem /etc/ssl/certs/51391683.0"
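	(The three openssl/ln sequences above all follow the same recipe for trusting a CA: place the PEM under /usr/share/ca-certificates, compute its subject-name hash with `openssl x509 -hash -noout`, and symlink /etc/ssl/certs/<hash>.0 at it so OpenSSL-based clients can resolve it. A minimal Go sketch of that recipe, with a hypothetical LinkCA helper and assuming openssl is on PATH:)

    package catrust

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // LinkCA derives the OpenSSL subject hash for pemPath and installs the
    // /etc/ssl/certs/<hash>.0 symlink, matching the shell steps in the log.
    func LinkCA(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" above
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	_ = os.Remove(link) // ln -fs semantics: replace any stale link
    	return os.Symlink(pemPath, link)
    }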
	I0813 00:05:43.655779  826514 ssh_runner.go:149] Run: crio config
	I0813 00:05:43.873484  826514 command_runner.go:124] ! time="2021-08-13T00:05:43Z" level=info msg="Starting CRI-O, version: 1.20.2, git: d5a999ad0a35d895ded554e1e18c142075501a98(clean)"
	I0813 00:05:43.874762  826514 command_runner.go:124] ! time="2021-08-13T00:05:43Z" level=warning msg="The 'registries' option in crio.conf(5) (referenced in \"/etc/crio/crio.conf\") has been deprecated and will be removed with CRI-O 1.21."
	I0813 00:05:43.874793  826514 command_runner.go:124] ! time="2021-08-13T00:05:43Z" level=warning msg="Please refer to containers-registries.conf(5) for configuring unqualified-search registries."
	I0813 00:05:43.876991  826514 command_runner.go:124] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0813 00:05:43.879382  826514 command_runner.go:124] > # The CRI-O configuration file specifies all of the available configuration
	I0813 00:05:43.879398  826514 command_runner.go:124] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0813 00:05:43.879405  826514 command_runner.go:124] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0813 00:05:43.879408  826514 command_runner.go:124] > #
	I0813 00:05:43.879416  826514 command_runner.go:124] > # Please refer to crio.conf(5) for details of all configuration options.
	I0813 00:05:43.879427  826514 command_runner.go:124] > # CRI-O supports partial configuration reload during runtime, which can be
	I0813 00:05:43.879440  826514 command_runner.go:124] > # done by sending SIGHUP to the running process. Currently supported options
	I0813 00:05:43.879454  826514 command_runner.go:124] > # are explicitly mentioned with: 'This option supports live configuration
	I0813 00:05:43.879464  826514 command_runner.go:124] > # reload'.
	I0813 00:05:43.879475  826514 command_runner.go:124] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0813 00:05:43.879488  826514 command_runner.go:124] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0813 00:05:43.879501  826514 command_runner.go:124] > # you want to change the system's defaults. If you want to modify storage just
	I0813 00:05:43.879513  826514 command_runner.go:124] > # for CRI-O, you can change the storage configuration options here.
	I0813 00:05:43.879521  826514 command_runner.go:124] > [crio]
	I0813 00:05:43.879531  826514 command_runner.go:124] > # Path to the "root directory". CRI-O stores all of its data, including
	I0813 00:05:43.879541  826514 command_runner.go:124] > # container images, in this directory.
	I0813 00:05:43.879550  826514 command_runner.go:124] > #root = "/var/lib/containers/storage"
	I0813 00:05:43.879566  826514 command_runner.go:124] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0813 00:05:43.879576  826514 command_runner.go:124] > #runroot = "/var/run/containers/storage"
	I0813 00:05:43.879590  826514 command_runner.go:124] > # Storage driver used to manage the storage of images and containers. Please
	I0813 00:05:43.879602  826514 command_runner.go:124] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0813 00:05:43.879612  826514 command_runner.go:124] > #storage_driver = "overlay"
	I0813 00:05:43.879621  826514 command_runner.go:124] > # List of options to pass to the storage driver. Please refer to
	I0813 00:05:43.879630  826514 command_runner.go:124] > # containers-storage.conf(5) to see all available storage options.
	I0813 00:05:43.879635  826514 command_runner.go:124] > #storage_option = [
	I0813 00:05:43.879639  826514 command_runner.go:124] > #]
	I0813 00:05:43.879646  826514 command_runner.go:124] > # The default log directory where all logs will go unless directly specified by
	I0813 00:05:43.879654  826514 command_runner.go:124] > # the kubelet. The log directory specified must be an absolute directory.
	I0813 00:05:43.879658  826514 command_runner.go:124] > log_dir = "/var/log/crio/pods"
	I0813 00:05:43.879666  826514 command_runner.go:124] > # Location for CRI-O to lay down the temporary version file.
	I0813 00:05:43.879673  826514 command_runner.go:124] > # It is used to check if crio wipe should wipe containers, which should
	I0813 00:05:43.879679  826514 command_runner.go:124] > # always happen on a node reboot
	I0813 00:05:43.879684  826514 command_runner.go:124] > version_file = "/var/run/crio/version"
	I0813 00:05:43.879694  826514 command_runner.go:124] > # Location for CRI-O to lay down the persistent version file.
	I0813 00:05:43.879700  826514 command_runner.go:124] > # It is used to check if crio wipe should wipe images, which should
	I0813 00:05:43.879721  826514 command_runner.go:124] > # only happen when CRI-O has been upgraded
	I0813 00:05:43.879730  826514 command_runner.go:124] > version_file_persist = "/var/lib/crio/version"
	I0813 00:05:43.879737  826514 command_runner.go:124] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0813 00:05:43.879743  826514 command_runner.go:124] > [crio.api]
	I0813 00:05:43.879748  826514 command_runner.go:124] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0813 00:05:43.879753  826514 command_runner.go:124] > listen = "/var/run/crio/crio.sock"
	I0813 00:05:43.879759  826514 command_runner.go:124] > # IP address on which the stream server will listen.
	I0813 00:05:43.879765  826514 command_runner.go:124] > stream_address = "127.0.0.1"
	I0813 00:05:43.879772  826514 command_runner.go:124] > # The port on which the stream server will listen. If the port is set to "0", then
	I0813 00:05:43.879778  826514 command_runner.go:124] > # CRI-O will allocate a random free port number.
	I0813 00:05:43.879782  826514 command_runner.go:124] > stream_port = "0"
	I0813 00:05:43.879787  826514 command_runner.go:124] > # Enable encrypted TLS transport of the stream server.
	I0813 00:05:43.879793  826514 command_runner.go:124] > stream_enable_tls = false
	I0813 00:05:43.879801  826514 command_runner.go:124] > # Length of time until open streams terminate due to lack of activity
	I0813 00:05:43.879807  826514 command_runner.go:124] > stream_idle_timeout = ""
	I0813 00:05:43.879814  826514 command_runner.go:124] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0813 00:05:43.879822  826514 command_runner.go:124] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0813 00:05:43.879826  826514 command_runner.go:124] > # minutes.
	I0813 00:05:43.879830  826514 command_runner.go:124] > stream_tls_cert = ""
	I0813 00:05:43.879838  826514 command_runner.go:124] > # Path to the key file used to serve the encrypted stream. This file can
	I0813 00:05:43.879844  826514 command_runner.go:124] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0813 00:05:43.879852  826514 command_runner.go:124] > stream_tls_key = ""
	I0813 00:05:43.879860  826514 command_runner.go:124] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0813 00:05:43.879870  826514 command_runner.go:124] > # communication with the encrypted stream. This file can change and CRI-O will
	I0813 00:05:43.879877  826514 command_runner.go:124] > # automatically pick up the changes within 5 minutes.
	I0813 00:05:43.879881  826514 command_runner.go:124] > stream_tls_ca = ""
	I0813 00:05:43.879891  826514 command_runner.go:124] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0813 00:05:43.879897  826514 command_runner.go:124] > grpc_max_send_msg_size = 16777216
	I0813 00:05:43.879905  826514 command_runner.go:124] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0813 00:05:43.879911  826514 command_runner.go:124] > grpc_max_recv_msg_size = 16777216
	I0813 00:05:43.879918  826514 command_runner.go:124] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0813 00:05:43.879927  826514 command_runner.go:124] > # and options for how to set up and manage the OCI runtime.
	I0813 00:05:43.879930  826514 command_runner.go:124] > [crio.runtime]
	I0813 00:05:43.879936  826514 command_runner.go:124] > # A list of ulimits to be set in containers by default, specified as
	I0813 00:05:43.879943  826514 command_runner.go:124] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0813 00:05:43.879947  826514 command_runner.go:124] > # "nofile=1024:2048"
	I0813 00:05:43.879954  826514 command_runner.go:124] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0813 00:05:43.879959  826514 command_runner.go:124] > #default_ulimits = [
	I0813 00:05:43.879963  826514 command_runner.go:124] > #]
	I0813 00:05:43.879969  826514 command_runner.go:124] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0813 00:05:43.879976  826514 command_runner.go:124] > no_pivot = false
	I0813 00:05:43.879982  826514 command_runner.go:124] > # decryption_keys_path is the path where the keys required for
	I0813 00:05:43.880040  826514 command_runner.go:124] > # image decryption are stored. This option supports live configuration reload.
	I0813 00:05:43.880050  826514 command_runner.go:124] > decryption_keys_path = "/etc/crio/keys/"
	I0813 00:05:43.880056  826514 command_runner.go:124] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0813 00:05:43.880061  826514 command_runner.go:124] > # Will be searched for using $PATH if empty.
	I0813 00:05:43.880065  826514 command_runner.go:124] > conmon = "/usr/libexec/crio/conmon"
	I0813 00:05:43.880071  826514 command_runner.go:124] > # Cgroup setting for conmon
	I0813 00:05:43.880075  826514 command_runner.go:124] > conmon_cgroup = "system.slice"
	I0813 00:05:43.880082  826514 command_runner.go:124] > # Environment variable list for the conmon process, used for passing necessary
	I0813 00:05:43.880089  826514 command_runner.go:124] > # environment variables to conmon or the runtime.
	I0813 00:05:43.880093  826514 command_runner.go:124] > conmon_env = [
	I0813 00:05:43.880100  826514 command_runner.go:124] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0813 00:05:43.880103  826514 command_runner.go:124] > ]
	I0813 00:05:43.880109  826514 command_runner.go:124] > # Additional environment variables to set for all the
	I0813 00:05:43.880117  826514 command_runner.go:124] > # containers. These are overridden if set in the
	I0813 00:05:43.880123  826514 command_runner.go:124] > # container image spec or in the container runtime configuration.
	I0813 00:05:43.880129  826514 command_runner.go:124] > default_env = [
	I0813 00:05:43.880132  826514 command_runner.go:124] > ]
	I0813 00:05:43.880139  826514 command_runner.go:124] > # If true, SELinux will be used for pod separation on the host.
	I0813 00:05:43.880145  826514 command_runner.go:124] > selinux = false
	I0813 00:05:43.880151  826514 command_runner.go:124] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0813 00:05:43.880157  826514 command_runner.go:124] > # for the runtime. If not specified, then the internal default seccomp profile
	I0813 00:05:43.880165  826514 command_runner.go:124] > # will be used. This option supports live configuration reload.
	I0813 00:05:43.880169  826514 command_runner.go:124] > seccomp_profile = ""
	I0813 00:05:43.880176  826514 command_runner.go:124] > # Changes the meaning of an empty seccomp profile. By default
	I0813 00:05:43.880182  826514 command_runner.go:124] > # (and according to CRI spec), an empty profile means unconfined.
	I0813 00:05:43.880190  826514 command_runner.go:124] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0813 00:05:43.880195  826514 command_runner.go:124] > # which might increase security.
	I0813 00:05:43.880206  826514 command_runner.go:124] > seccomp_use_default_when_empty = false
	I0813 00:05:43.880216  826514 command_runner.go:124] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0813 00:05:43.880223  826514 command_runner.go:124] > # profile name is "crio-default". This profile only takes effect if the user
	I0813 00:05:43.880231  826514 command_runner.go:124] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0813 00:05:43.880239  826514 command_runner.go:124] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0813 00:05:43.880246  826514 command_runner.go:124] > # This option supports live configuration reload.
	I0813 00:05:43.880251  826514 command_runner.go:124] > apparmor_profile = "crio-default"
	I0813 00:05:43.880258  826514 command_runner.go:124] > # Used to change irqbalance service config file path which is used for configuring
	I0813 00:05:43.880263  826514 command_runner.go:124] > # irqbalance daemon.
	I0813 00:05:43.880268  826514 command_runner.go:124] > irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0813 00:05:43.880276  826514 command_runner.go:124] > # Cgroup management implementation used for the runtime.
	I0813 00:05:43.880280  826514 command_runner.go:124] > cgroup_manager = "systemd"
	I0813 00:05:43.880286  826514 command_runner.go:124] > # Specify whether the image pull must be performed in a separate cgroup.
	I0813 00:05:43.880292  826514 command_runner.go:124] > separate_pull_cgroup = ""
	I0813 00:05:43.880299  826514 command_runner.go:124] > # List of default capabilities for containers. If it is empty or commented out,
	I0813 00:05:43.880308  826514 command_runner.go:124] > # only the capabilities defined in the containers json file by the user/kube
	I0813 00:05:43.880311  826514 command_runner.go:124] > # will be added.
	I0813 00:05:43.880317  826514 command_runner.go:124] > default_capabilities = [
	I0813 00:05:43.880321  826514 command_runner.go:124] > 	"CHOWN",
	I0813 00:05:43.880324  826514 command_runner.go:124] > 	"DAC_OVERRIDE",
	I0813 00:05:43.880329  826514 command_runner.go:124] > 	"FSETID",
	I0813 00:05:43.880332  826514 command_runner.go:124] > 	"FOWNER",
	I0813 00:05:43.880337  826514 command_runner.go:124] > 	"SETGID",
	I0813 00:05:43.880340  826514 command_runner.go:124] > 	"SETUID",
	I0813 00:05:43.880345  826514 command_runner.go:124] > 	"SETPCAP",
	I0813 00:05:43.880350  826514 command_runner.go:124] > 	"NET_BIND_SERVICE",
	I0813 00:05:43.880354  826514 command_runner.go:124] > 	"KILL",
	I0813 00:05:43.880357  826514 command_runner.go:124] > ]
	I0813 00:05:43.880364  826514 command_runner.go:124] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0813 00:05:43.880371  826514 command_runner.go:124] > # defined in the container json file by the user/kube will be added.
	I0813 00:05:43.880377  826514 command_runner.go:124] > default_sysctls = [
	I0813 00:05:43.880380  826514 command_runner.go:124] > ]
	I0813 00:05:43.880387  826514 command_runner.go:124] > # List of additional devices, specified as
	I0813 00:05:43.880395  826514 command_runner.go:124] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0813 00:05:43.880403  826514 command_runner.go:124] > # If it is empty or commented out, only the devices
	I0813 00:05:43.880409  826514 command_runner.go:124] > # defined in the container json file by the user/kube will be added.
	I0813 00:05:43.880415  826514 command_runner.go:124] > additional_devices = [
	I0813 00:05:43.880418  826514 command_runner.go:124] > ]
	I0813 00:05:43.880426  826514 command_runner.go:124] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0813 00:05:43.880434  826514 command_runner.go:124] > # directories does not exist, then CRI-O will automatically skip them.
	I0813 00:05:43.880437  826514 command_runner.go:124] > hooks_dir = [
	I0813 00:05:43.880442  826514 command_runner.go:124] > 	"/usr/share/containers/oci/hooks.d",
	I0813 00:05:43.880445  826514 command_runner.go:124] > ]
	I0813 00:05:43.880451  826514 command_runner.go:124] > # Path to the file specifying the defaults mounts for each container. The
	I0813 00:05:43.880459  826514 command_runner.go:124] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0813 00:05:43.880468  826514 command_runner.go:124] > # its default mounts from the following two files:
	I0813 00:05:43.880472  826514 command_runner.go:124] > #
	I0813 00:05:43.880479  826514 command_runner.go:124] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0813 00:05:43.880488  826514 command_runner.go:124] > #      override file, where users can either add in their own default mounts, or
	I0813 00:05:43.880493  826514 command_runner.go:124] > #      override the default mounts shipped with the package.
	I0813 00:05:43.880499  826514 command_runner.go:124] > #
	I0813 00:05:43.880505  826514 command_runner.go:124] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0813 00:05:43.880515  826514 command_runner.go:124] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0813 00:05:43.880521  826514 command_runner.go:124] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0813 00:05:43.880529  826514 command_runner.go:124] > #      only add mounts it finds in this file.
	I0813 00:05:43.880532  826514 command_runner.go:124] > #
	I0813 00:05:43.880536  826514 command_runner.go:124] > #default_mounts_file = ""
	I0813 00:05:43.880541  826514 command_runner.go:124] > # Maximum number of processes allowed in a container.
	I0813 00:05:43.880546  826514 command_runner.go:124] > pids_limit = 1024
	I0813 00:05:43.880552  826514 command_runner.go:124] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0813 00:05:43.880561  826514 command_runner.go:124] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0813 00:05:43.880568  826514 command_runner.go:124] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0813 00:05:43.880577  826514 command_runner.go:124] > # limit is never exceeded.
	I0813 00:05:43.880581  826514 command_runner.go:124] > log_size_max = -1
	I0813 00:05:43.880608  826514 command_runner.go:124] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0813 00:05:43.880619  826514 command_runner.go:124] > log_to_journald = false
	I0813 00:05:43.880631  826514 command_runner.go:124] > # Path to directory in which container exit files are written to by conmon.
	I0813 00:05:43.880641  826514 command_runner.go:124] > container_exits_dir = "/var/run/crio/exits"
	I0813 00:05:43.880650  826514 command_runner.go:124] > # Path to directory for container attach sockets.
	I0813 00:05:43.880661  826514 command_runner.go:124] > container_attach_socket_dir = "/var/run/crio"
	I0813 00:05:43.880673  826514 command_runner.go:124] > # The prefix to use for the source of the bind mounts.
	I0813 00:05:43.880683  826514 command_runner.go:124] > bind_mount_prefix = ""
	I0813 00:05:43.880693  826514 command_runner.go:124] > # If set to true, all containers will run in read-only mode.
	I0813 00:05:43.880702  826514 command_runner.go:124] > read_only = false
	I0813 00:05:43.880714  826514 command_runner.go:124] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0813 00:05:43.880727  826514 command_runner.go:124] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0813 00:05:43.880737  826514 command_runner.go:124] > # live configuration reload.
	I0813 00:05:43.880744  826514 command_runner.go:124] > log_level = "info"
	I0813 00:05:43.880753  826514 command_runner.go:124] > # Filter the log messages by the provided regular expression.
	I0813 00:05:43.880763  826514 command_runner.go:124] > # This option supports live configuration reload.
	I0813 00:05:43.880770  826514 command_runner.go:124] > log_filter = ""
	I0813 00:05:43.880781  826514 command_runner.go:124] > # The UID mappings for the user namespace of each container. A range is
	I0813 00:05:43.880794  826514 command_runner.go:124] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0813 00:05:43.880803  826514 command_runner.go:124] > # separated by comma.
	I0813 00:05:43.880810  826514 command_runner.go:124] > uid_mappings = ""
	I0813 00:05:43.880819  826514 command_runner.go:124] > # The GID mappings for the user namespace of each container. A range is
	I0813 00:05:43.880831  826514 command_runner.go:124] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0813 00:05:43.880840  826514 command_runner.go:124] > # separated by comma.
	I0813 00:05:43.880846  826514 command_runner.go:124] > gid_mappings = ""
	I0813 00:05:43.880858  826514 command_runner.go:124] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0813 00:05:43.880869  826514 command_runner.go:124] > # regarding the proper termination of the container. The lowest possible
	I0813 00:05:43.880881  826514 command_runner.go:124] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0813 00:05:43.880888  826514 command_runner.go:124] > ctr_stop_timeout = 30
	I0813 00:05:43.880899  826514 command_runner.go:124] > # manage_ns_lifecycle determines whether we pin and remove namespaces
	I0813 00:05:43.880910  826514 command_runner.go:124] > # and manage their lifecycle.
	I0813 00:05:43.880924  826514 command_runner.go:124] > # This option is being deprecated, and will be unconditionally true in the future.
	I0813 00:05:43.880934  826514 command_runner.go:124] > manage_ns_lifecycle = true
	I0813 00:05:43.880945  826514 command_runner.go:124] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0813 00:05:43.880957  826514 command_runner.go:124] > # when a pod does not have a private PID namespace, and does not use
	I0813 00:05:43.880967  826514 command_runner.go:124] > # a kernel separating runtime (like kata).
	I0813 00:05:43.880972  826514 command_runner.go:124] > # It requires manage_ns_lifecycle to be true.
	I0813 00:05:43.880978  826514 command_runner.go:124] > drop_infra_ctr = false
	I0813 00:05:43.880985  826514 command_runner.go:124] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0813 00:05:43.880994  826514 command_runner.go:124] > # You can use linux CPU list format to specify desired CPUs.
	I0813 00:05:43.881002  826514 command_runner.go:124] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0813 00:05:43.881008  826514 command_runner.go:124] > # infra_ctr_cpuset = ""
	I0813 00:05:43.881015  826514 command_runner.go:124] > # The directory where the state of the managed namespaces gets tracked.
	I0813 00:05:43.881023  826514 command_runner.go:124] > # Only used when manage_ns_lifecycle is true.
	I0813 00:05:43.881027  826514 command_runner.go:124] > namespaces_dir = "/var/run"
	I0813 00:05:43.881038  826514 command_runner.go:124] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0813 00:05:43.881045  826514 command_runner.go:124] > pinns_path = "/usr/bin/pinns"
	I0813 00:05:43.881051  826514 command_runner.go:124] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0813 00:05:43.881060  826514 command_runner.go:124] > # The name is matched against the runtimes map below. If this value is changed,
	I0813 00:05:43.881068  826514 command_runner.go:124] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0813 00:05:43.881073  826514 command_runner.go:124] > default_runtime = "runc"
	I0813 00:05:43.881081  826514 command_runner.go:124] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0813 00:05:43.881089  826514 command_runner.go:124] > # The runtime to use is picked based on the runtime_handler provided by the CRI.
	I0813 00:05:43.881096  826514 command_runner.go:124] > # If no runtime_handler is provided, the runtime will be picked based on the level
	I0813 00:05:43.881105  826514 command_runner.go:124] > # of trust of the workload. Each entry in the table should follow the format:
	I0813 00:05:43.881110  826514 command_runner.go:124] > #
	I0813 00:05:43.881114  826514 command_runner.go:124] > #[crio.runtime.runtimes.runtime-handler]
	I0813 00:05:43.881119  826514 command_runner.go:124] > #  runtime_path = "/path/to/the/executable"
	I0813 00:05:43.881126  826514 command_runner.go:124] > #  runtime_type = "oci"
	I0813 00:05:43.881131  826514 command_runner.go:124] > #  runtime_root = "/path/to/the/root"
	I0813 00:05:43.881138  826514 command_runner.go:124] > #  privileged_without_host_devices = false
	I0813 00:05:43.881142  826514 command_runner.go:124] > #  allowed_annotations = []
	I0813 00:05:43.881146  826514 command_runner.go:124] > # Where:
	I0813 00:05:43.881152  826514 command_runner.go:124] > # - runtime-handler: name used to identify the runtime
	I0813 00:05:43.881161  826514 command_runner.go:124] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0813 00:05:43.881168  826514 command_runner.go:124] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0813 00:05:43.881178  826514 command_runner.go:124] > #   the runtime executable name, and the runtime executable should be placed
	I0813 00:05:43.881185  826514 command_runner.go:124] > #   in $PATH.
	I0813 00:05:43.881192  826514 command_runner.go:124] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0813 00:05:43.881203  826514 command_runner.go:124] > #   omitted, an "oci" runtime is assumed.
	I0813 00:05:43.881209  826514 command_runner.go:124] > # - runtime_root (optional, string): root directory for storage of containers
	I0813 00:05:43.881215  826514 command_runner.go:124] > #   state.
	I0813 00:05:43.881222  826514 command_runner.go:124] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0813 00:05:43.881231  826514 command_runner.go:124] > #   host devices from being passed to privileged containers.
	I0813 00:05:43.881238  826514 command_runner.go:124] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0813 00:05:43.881250  826514 command_runner.go:124] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0813 00:05:43.881263  826514 command_runner.go:124] > #   The currently recognized values are:
	I0813 00:05:43.881273  826514 command_runner.go:124] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0813 00:05:43.881280  826514 command_runner.go:124] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0813 00:05:43.881287  826514 command_runner.go:124] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0813 00:05:43.881292  826514 command_runner.go:124] > [crio.runtime.runtimes.runc]
	I0813 00:05:43.881297  826514 command_runner.go:124] > runtime_path = "/usr/bin/runc"
	I0813 00:05:43.881301  826514 command_runner.go:124] > runtime_type = "oci"
	I0813 00:05:43.881307  826514 command_runner.go:124] > runtime_root = "/run/runc"
	I0813 00:05:43.881315  826514 command_runner.go:124] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0813 00:05:43.881320  826514 command_runner.go:124] > # running containers
	I0813 00:05:43.881324  826514 command_runner.go:124] > #[crio.runtime.runtimes.crun]
	I0813 00:05:43.881332  826514 command_runner.go:124] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0813 00:05:43.881338  826514 command_runner.go:124] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0813 00:05:43.881345  826514 command_runner.go:124] > # surface and mitigating the consequences of container breakout.
	I0813 00:05:43.881351  826514 command_runner.go:124] > # Kata Containers with the default configured VMM
	I0813 00:05:43.881356  826514 command_runner.go:124] > #[crio.runtime.runtimes.kata-runtime]
	I0813 00:05:43.881361  826514 command_runner.go:124] > # Kata Containers with the QEMU VMM
	I0813 00:05:43.881366  826514 command_runner.go:124] > #[crio.runtime.runtimes.kata-qemu]
	I0813 00:05:43.881371  826514 command_runner.go:124] > # Kata Containers with the Firecracker VMM
	I0813 00:05:43.881378  826514 command_runner.go:124] > #[crio.runtime.runtimes.kata-fc]
	I0813 00:05:43.881385  826514 command_runner.go:124] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0813 00:05:43.881388  826514 command_runner.go:124] > #
	I0813 00:05:43.881394  826514 command_runner.go:124] > # CRI-O reads its configured registries defaults from the system wide
	I0813 00:05:43.881403  826514 command_runner.go:124] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0813 00:05:43.881409  826514 command_runner.go:124] > # you want to modify just CRI-O, you can change the registries configuration in
	I0813 00:05:43.881418  826514 command_runner.go:124] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0813 00:05:43.881424  826514 command_runner.go:124] > # use the system's defaults from /etc/containers/registries.conf.
	I0813 00:05:43.881428  826514 command_runner.go:124] > [crio.image]
	I0813 00:05:43.881434  826514 command_runner.go:124] > # Default transport for pulling images from a remote container storage.
	I0813 00:05:43.881439  826514 command_runner.go:124] > default_transport = "docker://"
	I0813 00:05:43.881446  826514 command_runner.go:124] > # The path to a file containing credentials necessary for pulling images from
	I0813 00:05:43.881453  826514 command_runner.go:124] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0813 00:05:43.881457  826514 command_runner.go:124] > global_auth_file = ""
	I0813 00:05:43.881462  826514 command_runner.go:124] > # The image used to instantiate infra containers.
	I0813 00:05:43.881468  826514 command_runner.go:124] > # This option supports live configuration reload.
	I0813 00:05:43.881473  826514 command_runner.go:124] > pause_image = "k8s.gcr.io/pause:3.4.1"
	I0813 00:05:43.881481  826514 command_runner.go:124] > # The path to a file containing credentials specific for pulling the pause_image from
	I0813 00:05:43.881488  826514 command_runner.go:124] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0813 00:05:43.881495  826514 command_runner.go:124] > # This option supports live configuration reload.
	I0813 00:05:43.881499  826514 command_runner.go:124] > pause_image_auth_file = ""
	I0813 00:05:43.881506  826514 command_runner.go:124] > # The command to run to have a container stay in the paused state.
	I0813 00:05:43.881512  826514 command_runner.go:124] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0813 00:05:43.881519  826514 command_runner.go:124] > # specified in the pause image. When commented out, it will fallback to the
	I0813 00:05:43.881525  826514 command_runner.go:124] > # default: "/pause". This option supports live configuration reload.
	I0813 00:05:43.881530  826514 command_runner.go:124] > pause_command = "/pause"
	I0813 00:05:43.881537  826514 command_runner.go:124] > # Path to the file which decides what sort of policy we use when deciding
	I0813 00:05:43.881546  826514 command_runner.go:124] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0813 00:05:43.881552  826514 command_runner.go:124] > # this option be used, as the default behavior of using the system-wide default
	I0813 00:05:43.881560  826514 command_runner.go:124] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0813 00:05:43.881565  826514 command_runner.go:124] > # refer to containers-policy.json(5) for more details.
	I0813 00:05:43.881570  826514 command_runner.go:124] > signature_policy = ""
	I0813 00:05:43.881576  826514 command_runner.go:124] > # List of registries to skip TLS verification for pulling images. Please
	I0813 00:05:43.881585  826514 command_runner.go:124] > # consider configuring the registries via /etc/containers/registries.conf before
	I0813 00:05:43.881589  826514 command_runner.go:124] > # changing them here.
	I0813 00:05:43.881595  826514 command_runner.go:124] > #insecure_registries = "[]"
	I0813 00:05:43.881602  826514 command_runner.go:124] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0813 00:05:43.881609  826514 command_runner.go:124] > # ignore; the latter will ignore volumes entirely.
	I0813 00:05:43.881613  826514 command_runner.go:124] > image_volumes = "mkdir"
	I0813 00:05:43.881619  826514 command_runner.go:124] > # List of registries to be used when pulling an unqualified image (e.g.,
	I0813 00:05:43.881626  826514 command_runner.go:124] > # "alpine:latest"). By default, registries is set to "docker.io" for
	I0813 00:05:43.881633  826514 command_runner.go:124] > # compatibility reasons. Depending on your workload and use case you may add more
	I0813 00:05:43.881642  826514 command_runner.go:124] > # registries (e.g., "quay.io", "registry.fedoraproject.org",
	I0813 00:05:43.881646  826514 command_runner.go:124] > # "registry.opensuse.org", etc.).
	I0813 00:05:43.881650  826514 command_runner.go:124] > #registries = [
	I0813 00:05:43.881654  826514 command_runner.go:124] > # 	"docker.io",
	I0813 00:05:43.881657  826514 command_runner.go:124] > #]
	I0813 00:05:43.881662  826514 command_runner.go:124] > # Temporary directory to use for storing big files
	I0813 00:05:43.881668  826514 command_runner.go:124] > big_files_temporary_dir = ""
	I0813 00:05:43.881674  826514 command_runner.go:124] > # The crio.network table contains settings pertaining to the management of
	I0813 00:05:43.881682  826514 command_runner.go:124] > # CNI plugins.
	I0813 00:05:43.881686  826514 command_runner.go:124] > [crio.network]
	I0813 00:05:43.881697  826514 command_runner.go:124] > # The default CNI network name to be selected. If not set or "", then
	I0813 00:05:43.881703  826514 command_runner.go:124] > # CRI-O will pick-up the first one found in network_dir.
	I0813 00:05:43.881708  826514 command_runner.go:124] > # cni_default_network = "kindnet"
	I0813 00:05:43.881714  826514 command_runner.go:124] > # Path to the directory where CNI configuration files are located.
	I0813 00:05:43.881718  826514 command_runner.go:124] > network_dir = "/etc/cni/net.d/"
	I0813 00:05:43.881726  826514 command_runner.go:124] > # Paths to directories where CNI plugin binaries are located.
	I0813 00:05:43.881731  826514 command_runner.go:124] > plugin_dirs = [
	I0813 00:05:43.881735  826514 command_runner.go:124] > 	"/opt/cni/bin/",
	I0813 00:05:43.881738  826514 command_runner.go:124] > ]
	I0813 00:05:43.881744  826514 command_runner.go:124] > # A necessary configuration for Prometheus based metrics retrieval
	I0813 00:05:43.881751  826514 command_runner.go:124] > [crio.metrics]
	I0813 00:05:43.881756  826514 command_runner.go:124] > # Globally enable or disable metrics support.
	I0813 00:05:43.881759  826514 command_runner.go:124] > enable_metrics = true
	I0813 00:05:43.881766  826514 command_runner.go:124] > # The port on which the metrics server will listen.
	I0813 00:05:43.881770  826514 command_runner.go:124] > metrics_port = 9090
	I0813 00:05:43.881811  826514 command_runner.go:124] > # Local socket path to bind the metrics server to
	I0813 00:05:43.881819  826514 command_runner.go:124] > metrics_socket = ""
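	(The dump above is CRI-O's TOML configuration echoed line by line. If it were saved back to /etc/crio/crio.conf, a few of its fields could be read with any TOML parser; the sketch below assumes the third-party github.com/BurntSushi/toml module, since CRI-O itself ships its own loader:)

    package main

    import (
    	"fmt"
    	"log"

    	"github.com/BurntSushi/toml"
    )

    // crioConf maps just the fields we want out of the [crio] and
    // [crio.runtime] tables shown in the dump.
    type crioConf struct {
    	Crio struct {
    		LogDir  string `toml:"log_dir"`
    		Runtime struct {
    			CgroupManager  string `toml:"cgroup_manager"`
    			DefaultRuntime string `toml:"default_runtime"`
    		} `toml:"runtime"`
    	} `toml:"crio"`
    }

    func main() {
    	var c crioConf
    	if _, err := toml.DecodeFile("/etc/crio/crio.conf", &c); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println(c.Crio.LogDir, c.Crio.Runtime.CgroupManager, c.Crio.Runtime.DefaultRuntime)
    }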
	I0813 00:05:43.881887  826514 cni.go:93] Creating CNI manager for ""
	I0813 00:05:43.881896  826514 cni.go:154] 2 nodes found, recommending kindnet
	I0813 00:05:43.881906  826514 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 00:05:43.881919  826514 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.152 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20210813000359-820289 NodeName:multinode-20210813000359-820289-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.22"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.39.152 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 00:05:43.882032  826514 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.152
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "multinode-20210813000359-820289-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.152
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.22"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
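	(The kubeadm config above is rendered from the options struct logged at kubeadm.go:153. A sketch of that render step using Go's text/template; the template text here is illustrative, not minikube's actual template:)

    package main

    import (
    	"os"
    	"text/template"
    )

    // nodeReg renders the nodeRegistration stanza seen in the config above
    // from per-node values.
    const nodeReg = `nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
      taints: []
    `

    func main() {
    	t := template.Must(template.New("nodeReg").Parse(nodeReg))
    	_ = t.Execute(os.Stdout, map[string]string{
    		"CRISocket": "/var/run/crio/crio.sock",
    		"NodeName":  "multinode-20210813000359-820289-m02",
    		"NodeIP":    "192.168.39.152",
    	})
    }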
	
	I0813 00:05:43.882100  826514 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --hostname-override=multinode-20210813000359-820289-m02 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.152 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:multinode-20210813000359-820289 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
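	(The empty ExecStart= line in the kubelet drop-in above is the standard systemd override idiom: it clears any ExecStart inherited from the base kubelet.service before the second ExecStart= installs the fully flagged command.)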
	I0813 00:05:43.882145  826514 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 00:05:43.889210  826514 command_runner.go:124] > kubeadm
	I0813 00:05:43.889221  826514 command_runner.go:124] > kubectl
	I0813 00:05:43.889225  826514 command_runner.go:124] > kubelet
	I0813 00:05:43.889601  826514 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 00:05:43.889657  826514 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0813 00:05:43.896461  826514 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (516 bytes)
	I0813 00:05:43.907373  826514 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 00:05:43.918272  826514 ssh_runner.go:149] Run: grep 192.168.39.22	control-plane.minikube.internal$ /etc/hosts
	I0813 00:05:43.921976  826514 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.22	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 00:05:43.932277  826514 host.go:66] Checking if "multinode-20210813000359-820289" exists ...
	I0813 00:05:43.932613  826514 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 00:05:43.932647  826514 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 00:05:43.943596  826514 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37109
	I0813 00:05:43.944060  826514 main.go:130] libmachine: () Calling .GetVersion
	I0813 00:05:43.944545  826514 main.go:130] libmachine: Using API Version  1
	I0813 00:05:43.944568  826514 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 00:05:43.944901  826514 main.go:130] libmachine: () Calling .GetMachineName
	I0813 00:05:43.945107  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .DriverName
	I0813 00:05:43.945221  826514 start.go:241] JoinCluster: &{Name:multinode-20210813000359-820289 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12122/minikube-v1.22.0-1628238775-12122.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-20210813000359-820289 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.22 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.152 Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0}
	I0813 00:05:43.945312  826514 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm token create --print-join-command --ttl=0"
	I0813 00:05:43.945327  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHHostname
	I0813 00:05:43.950486  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:05:43.950875  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e4:55", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:04:13 +0000 UTC Type:0 Mac:52:54:00:b5:e4:55 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:multinode-20210813000359-820289 Clientid:01:52:54:00:b5:e4:55}
	I0813 00:05:43.950907  826514 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined IP address 192.168.39.22 and MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:05:43.951073  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHPort
	I0813 00:05:43.951237  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHKeyPath
	I0813 00:05:43.951421  826514 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHUsername
	I0813 00:05:43.951570  826514 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289/id_rsa Username:docker}
	I0813 00:05:44.148226  826514 command_runner.go:124] > kubeadm join control-plane.minikube.internal:8443 --token sgcbtd.wvx1ip84hjdcplgt --discovery-token-ca-cert-hash sha256:a2926d5accd9a2d1096d4e62979978bfd5a94255856b68d015a34969efd36535 
	I0813 00:05:44.151001  826514 start.go:262] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.152 Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}
	I0813 00:05:44.151048  826514 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm join control-plane.minikube.internal:8443 --token sgcbtd.wvx1ip84hjdcplgt --discovery-token-ca-cert-hash sha256:a2926d5accd9a2d1096d4e62979978bfd5a94255856b68d015a34969efd36535 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-20210813000359-820289-m02"
	I0813 00:05:44.270484  826514 command_runner.go:124] > [preflight] Running pre-flight checks
	I0813 00:05:44.590024  826514 command_runner.go:124] > [preflight] Reading configuration from the cluster...
	I0813 00:05:44.590068  826514 command_runner.go:124] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0813 00:05:44.654543  826514 command_runner.go:124] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0813 00:05:44.655134  826514 command_runner.go:124] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0813 00:05:44.655195  826514 command_runner.go:124] > [kubelet-start] Starting the kubelet
	I0813 00:05:44.847085  826514 command_runner.go:124] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0813 00:05:50.981570  826514 command_runner.go:124] > This node has joined the cluster:
	I0813 00:05:50.981609  826514 command_runner.go:124] > * Certificate signing request was sent to apiserver and a response was received.
	I0813 00:05:50.981619  826514 command_runner.go:124] > * The Kubelet was informed of the new secure connection details.
	I0813 00:05:50.981631  826514 command_runner.go:124] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0813 00:05:50.983121  826514 command_runner.go:124] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0813 00:05:50.983155  826514 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm join control-plane.minikube.internal:8443 --token sgcbtd.wvx1ip84hjdcplgt --discovery-token-ca-cert-hash sha256:a2926d5accd9a2d1096d4e62979978bfd5a94255856b68d015a34969efd36535 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-20210813000359-820289-m02": (6.83208802s)
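
The lines above capture minikube's two-step join: start.go first asks the control plane to mint a fresh join command (`kubeadm token create --print-join-command --ttl=0`), then replays that command on the new worker with `--ignore-preflight-errors=all`, the CRI-O socket, and an explicit `--node-name`, completing in about 6.8s. A minimal sketch of the same sequence, assuming kubeadm is on PATH and each command runs on the right machine (minikube actually drives both remotely through its ssh_runner):

    // Sketch only: mirrors the token-create / join sequence from the log.
    // Assumes local execution; minikube runs these over SSH.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // On the control plane: mint a join command with a non-expiring token,
        // as in `kubeadm token create --print-join-command --ttl=0`.
        out, err := exec.Command("kubeadm", "token", "create",
            "--print-join-command", "--ttl=0").Output()
        if err != nil {
            panic(err)
        }
        join := strings.TrimSpace(string(out))

        // On the worker: append the extra flags seen in the log before running it.
        join += " --ignore-preflight-errors=all" +
            " --cri-socket /var/run/crio/crio.sock"
        fmt.Println("would run on worker:", join)
    }

Passing `--ttl=0` makes the bootstrap token non-expiring, which is why the same printed command could be replayed later for additional workers.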
	I0813 00:05:50.983183  826514 ssh_runner.go:149] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0813 00:05:51.339797  826514 command_runner.go:124] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0813 00:05:51.339946  826514 start.go:243] JoinCluster complete in 7.394719912s
	I0813 00:05:51.339976  826514 cni.go:93] Creating CNI manager for ""
	I0813 00:05:51.339984  826514 cni.go:154] 2 nodes found, recommending kindnet
	I0813 00:05:51.340055  826514 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0813 00:05:51.345313  826514 command_runner.go:124] >   File: /opt/cni/bin/portmap
	I0813 00:05:51.345337  826514 command_runner.go:124] >   Size: 2853400   	Blocks: 5576       IO Block: 4096   regular file
	I0813 00:05:51.345347  826514 command_runner.go:124] > Device: 10h/16d	Inode: 22646       Links: 1
	I0813 00:05:51.345363  826514 command_runner.go:124] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0813 00:05:51.345371  826514 command_runner.go:124] > Access: 2021-08-13 00:04:13.266164804 +0000
	I0813 00:05:51.345380  826514 command_runner.go:124] > Modify: 2021-08-06 09:23:24.000000000 +0000
	I0813 00:05:51.345387  826514 command_runner.go:124] > Change: 2021-08-13 00:04:09.548164804 +0000
	I0813 00:05:51.345394  826514 command_runner.go:124] >  Birth: -
	I0813 00:05:51.345719  826514 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0813 00:05:51.345734  826514 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0813 00:05:51.361711  826514 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0813 00:05:51.683551  826514 command_runner.go:124] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0813 00:05:51.685943  826514 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0813 00:05:51.689157  826514 command_runner.go:124] > serviceaccount/kindnet unchanged
	I0813 00:05:51.709844  826514 command_runner.go:124] > daemonset.apps/kindnet configured
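
With two nodes found, cni.go recommends kindnet, confirms the portmap plugin exists at /opt/cni/bin/portmap (the stat output above), copies a 2428-byte manifest to /var/tmp/minikube/cni.yaml over SSH, and applies it with the cluster's own kubectl; the "unchanged"/"configured" lines are ordinary `kubectl apply` output, since the control plane had already installed the same objects. A minimal sketch of the apply step, assuming a local kubectl and a placeholder manifest (both illustrative):

    // Sketch: write a CNI manifest and apply it, as the log's
    // `kubectl apply --kubeconfig=... -f /var/tmp/minikube/cni.yaml` does.
    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        manifest := []byte("# kindnet DaemonSet, RBAC, ServiceAccount ...\n")
        if err := os.MkdirAll("/var/tmp/minikube", 0755); err != nil {
            panic(err)
        }
        if err := os.WriteFile("/var/tmp/minikube/cni.yaml", manifest, 0644); err != nil {
            panic(err)
        }
        cmd := exec.Command("kubectl", "apply",
            "--kubeconfig", "/var/lib/minikube/kubeconfig", // path from the log
            "-f", "/var/tmp/minikube/cni.yaml")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }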
	I0813 00:05:51.712026  826514 start.go:226] Will wait 6m0s for node &{Name:m02 IP:192.168.39.152 Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}
	I0813 00:05:51.714238  826514 out.go:177] * Verifying Kubernetes components...
	I0813 00:05:51.714326  826514 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 00:05:51.725966  826514 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0813 00:05:51.726187  826514 kapi.go:59] client config for multinode-20210813000359-820289: &rest.Config{Host:"https://192.168.39.22:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813000359-820289/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 00:05:51.727523  826514 node_ready.go:35] waiting up to 6m0s for node "multinode-20210813000359-820289-m02" to be "Ready" ...
	I0813 00:05:51.727594  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289-m02
	I0813 00:05:51.727602  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:51.727608  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:51.727612  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:51.729926  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:51.729943  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:51.729949  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:51.729953  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:51.729958  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:51.729962  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:51.729966  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:51 GMT
	I0813 00:05:51.730254  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289-m02","uid":"cde91947-f0b9-43a6-9224-431ab64254d1","resourceVersion":"558","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"
v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{" [truncated 5722 chars]
	I0813 00:05:51.730504  826514 node_ready.go:49] node "multinode-20210813000359-820289-m02" has status "Ready":"True"
	I0813 00:05:51.730516  826514 node_ready.go:38] duration metric: took 2.972685ms waiting for node "multinode-20210813000359-820289-m02" to be "Ready" ...
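
node_ready.go polls `GET /api/v1/nodes/<name>` until the node reports `"Ready":"True"`; here the freshly joined worker was already Ready on the first request, so the wait finished in under 3ms. A minimal client-go sketch of that check, with an assumed kubeconfig path (the 500ms interval and 6m0s timeout mirror values stated in the log):

    // Sketch: poll a node's Ready condition with client-go, roughly what
    // node_ready.go does. Kubeconfig path is illustrative.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
            n, err := cs.CoreV1().Nodes().Get(context.TODO(),
                "multinode-20210813000359-820289-m02", metav1.GetOptions{})
            if err != nil {
                return false, nil // retry on transient API errors
            }
            for _, c := range n.Status.Conditions {
                if c.Type == corev1.NodeReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
        fmt.Println("node ready:", err == nil)
    }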
	I0813 00:05:51.730526  826514 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 00:05:51.730589  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods
	I0813 00:05:51.730599  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:51.730606  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:51.730612  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:51.737725  826514 round_trippers.go:457] Response Status: 200 OK in 7 milliseconds
	I0813 00:05:51.737744  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:51.737751  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:51.737755  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:51.737760  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:51.737764  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:51.737769  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:51 GMT
	I0813 00:05:51.740103  826514 request.go:1123] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"559"},"items":[{"metadata":{"name":"coredns-558bd4d5db-sstrb","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"16f6c77d-26a2-47e7-9c19-74736961cc13","resourceVersion":"480","creationTimestamp":"2021-08-13T00:05:06Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"65fbbea7-e9b5-486b-a329-69ff09468fc0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65fbbea7-e9b5-486b-a329-69ff09468fc0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller"
:{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k: [truncated 63733 chars]
	I0813 00:05:51.741507  826514 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-sstrb" in "kube-system" namespace to be "Ready" ...
	I0813 00:05:51.741589  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-sstrb
	I0813 00:05:51.741598  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:51.741603  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:51.741607  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:51.743554  826514 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:05:51.743570  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:51.743576  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:51 GMT
	I0813 00:05:51.743581  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:51.743586  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:51.743590  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:51.743594  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:51.743799  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-sstrb","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"16f6c77d-26a2-47e7-9c19-74736961cc13","resourceVersion":"480","creationTimestamp":"2021-08-13T00:05:06Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"65fbbea7-e9b5-486b-a329-69ff09468fc0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65fbbea7-e9b5-486b-a329-69ff09468fc0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5734 chars]
	I0813 00:05:51.744173  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289
	I0813 00:05:51.744188  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:51.744195  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:51.744202  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:51.745944  826514 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:05:51.745953  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:51.745958  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:51.745964  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:51 GMT
	I0813 00:05:51.745968  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:51.745973  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:51.745977  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:51.746186  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","resourceVersion":"378","creationTimestamp":"2021-08-13T00:04:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813000359-820289","minikube.k8s.io/updated_at":"2021_08_13T00_04_54_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6557 chars]
	I0813 00:05:51.746456  826514 pod_ready.go:92] pod "coredns-558bd4d5db-sstrb" in "kube-system" namespace has status "Ready":"True"
	I0813 00:05:51.746468  826514 pod_ready.go:81] duration metric: took 4.942023ms waiting for pod "coredns-558bd4d5db-sstrb" in "kube-system" namespace to be "Ready" ...
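
For each system-critical pod, pod_ready.go inspects the pod's Ready condition and then re-fetches the node it is scheduled on (hence the paired `GET .../nodes/...` immediately after each pod GET) before moving to the next pod. The condition test itself reduces to a helper like the following (the name is illustrative):

    package readiness

    import corev1 "k8s.io/api/core/v1"

    // podIsReady reports whether a pod's Ready condition is True -- the
    // check behind the pod_ready.go `"Ready":"True"` lines above.
    func podIsReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }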
	I0813 00:05:51.746476  826514 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-20210813000359-820289" in "kube-system" namespace to be "Ready" ...
	I0813 00:05:51.746532  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210813000359-820289
	I0813 00:05:51.746543  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:51.746550  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:51.746556  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:51.748676  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:51.748684  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:51.748691  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:51.748694  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:51 GMT
	I0813 00:05:51.748697  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:51.748703  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:51.748706  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:51.748850  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210813000359-820289","namespace":"kube-system","uid":"2d8ff24a-3267-4d8b-a528-3da3d3b70e54","resourceVersion":"330","creationTimestamp":"2021-08-13T00:04:59Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.22:2379","kubernetes.io/config.hash":"bc4647bddd439c8d0983a3b358a72513","kubernetes.io/config.mirror":"bc4647bddd439c8d0983a3b358a72513","kubernetes.io/config.seen":"2021-08-13T00:04:59.185501301Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.
kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.ha [truncated 5574 chars]
	I0813 00:05:51.749128  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289
	I0813 00:05:51.749141  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:51.749146  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:51.749149  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:51.750745  826514 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:05:51.750761  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:51.750768  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:51.750773  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:51 GMT
	I0813 00:05:51.750778  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:51.750782  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:51.750786  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:51.750970  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","resourceVersion":"378","creationTimestamp":"2021-08-13T00:04:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813000359-820289","minikube.k8s.io/updated_at":"2021_08_13T00_04_54_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6557 chars]
	I0813 00:05:51.751255  826514 pod_ready.go:92] pod "etcd-multinode-20210813000359-820289" in "kube-system" namespace has status "Ready":"True"
	I0813 00:05:51.751274  826514 pod_ready.go:81] duration metric: took 4.790813ms waiting for pod "etcd-multinode-20210813000359-820289" in "kube-system" namespace to be "Ready" ...
	I0813 00:05:51.751293  826514 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-20210813000359-820289" in "kube-system" namespace to be "Ready" ...
	I0813 00:05:51.751353  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20210813000359-820289
	I0813 00:05:51.751365  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:51.751372  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:51.751378  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:51.753644  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:51.753654  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:51.753658  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:51.753661  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:51.753664  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:51.753667  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:51.753670  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:51 GMT
	I0813 00:05:51.753845  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20210813000359-820289","namespace":"kube-system","uid":"b5954b4a-9e51-488b-a0fa-cacb7de86621","resourceVersion":"450","creationTimestamp":"2021-08-13T00:04:59Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.22:8443","kubernetes.io/config.hash":"e0dcc263218298eb0bc9dd91ad6c2c6d","kubernetes.io/config.mirror":"e0dcc263218298eb0bc9dd91ad6c2c6d","kubernetes.io/config.seen":"2021-08-13T00:04:59.185603315Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-addre [truncated 7252 chars]
	I0813 00:05:51.754159  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289
	I0813 00:05:51.754172  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:51.754177  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:51.754180  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:51.756001  826514 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:05:51.756019  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:51.756025  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:51 GMT
	I0813 00:05:51.756030  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:51.756034  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:51.756038  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:51.756045  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:51.756507  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","resourceVersion":"378","creationTimestamp":"2021-08-13T00:04:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813000359-820289","minikube.k8s.io/updated_at":"2021_08_13T00_04_54_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6557 chars]
	I0813 00:05:51.756781  826514 pod_ready.go:92] pod "kube-apiserver-multinode-20210813000359-820289" in "kube-system" namespace has status "Ready":"True"
	I0813 00:05:51.756796  826514 pod_ready.go:81] duration metric: took 5.493253ms waiting for pod "kube-apiserver-multinode-20210813000359-820289" in "kube-system" namespace to be "Ready" ...
	I0813 00:05:51.756807  826514 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-20210813000359-820289" in "kube-system" namespace to be "Ready" ...
	I0813 00:05:51.756862  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210813000359-820289
	I0813 00:05:51.756873  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:51.756879  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:51.756885  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:51.758693  826514 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:05:51.758708  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:51.758711  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:51.758714  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:51.758717  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:51.758720  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:51.758723  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:51 GMT
	I0813 00:05:51.758893  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210813000359-820289","namespace":"kube-system","uid":"f25a529b-df04-44a7-aa11-5f04f8acaaf9","resourceVersion":"452","creationTimestamp":"2021-08-13T00:04:53Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8e7d0bc335d72432dc6bd22d4541dfbd","kubernetes.io/config.mirror":"8e7d0bc335d72432dc6bd22d4541dfbd","kubernetes.io/config.seen":"2021-08-13T00:04:42.246742645Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con
fig.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config [truncated 6813 chars]
	I0813 00:05:51.759159  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289
	I0813 00:05:51.759171  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:51.759175  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:51.759180  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:51.761299  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:51.761309  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:51.761312  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:51 GMT
	I0813 00:05:51.761315  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:51.761320  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:51.761325  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:51.761328  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:51.761468  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","resourceVersion":"378","creationTimestamp":"2021-08-13T00:04:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813000359-820289","minikube.k8s.io/updated_at":"2021_08_13T00_04_54_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6557 chars]
	I0813 00:05:51.761768  826514 pod_ready.go:92] pod "kube-controller-manager-multinode-20210813000359-820289" in "kube-system" namespace has status "Ready":"True"
	I0813 00:05:51.761779  826514 pod_ready.go:81] duration metric: took 4.964506ms waiting for pod "kube-controller-manager-multinode-20210813000359-820289" in "kube-system" namespace to be "Ready" ...
	I0813 00:05:51.761789  826514 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8h4t8" in "kube-system" namespace to be "Ready" ...
	I0813 00:05:51.928175  826514 request.go:600] Waited for 166.312095ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8h4t8
	I0813 00:05:51.928236  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8h4t8
	I0813 00:05:51.928242  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:51.928248  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:51.928252  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:51.931345  826514 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 00:05:51.931364  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:51.931370  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:51.931375  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:51.931379  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:51.931384  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:51 GMT
	I0813 00:05:51.931389  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:51.931529  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8h4t8","generateName":"kube-proxy-","namespace":"kube-system","uid":"414dc4de-68ad-43b2-9847-aa0917e6d69d","resourceVersion":"547","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e61f546c-a1b1-4412-af08-7c2ebe78d772","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e61f546c-a1b1-4412-af08-7c2ebe78d772\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller
":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 4297 chars]
	I0813 00:05:52.128272  826514 request.go:600] Waited for 196.347658ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289-m02
	I0813 00:05:52.128331  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289-m02
	I0813 00:05:52.128338  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:52.128346  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:52.128352  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:52.131528  826514 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 00:05:52.131547  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:52.131551  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:52.131555  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:52 GMT
	I0813 00:05:52.131558  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:52.131564  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:52.131570  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:52.131753  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289-m02","uid":"cde91947-f0b9-43a6-9224-431ab64254d1","resourceVersion":"558","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"
v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{" [truncated 5722 chars]
	I0813 00:05:52.632912  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8h4t8
	I0813 00:05:52.632944  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:52.632952  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:52.632957  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:52.635329  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:52.635350  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:52.635356  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:52.635361  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:52.635368  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:52.635372  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:52.635377  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:52 GMT
	I0813 00:05:52.635523  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8h4t8","generateName":"kube-proxy-","namespace":"kube-system","uid":"414dc4de-68ad-43b2-9847-aa0917e6d69d","resourceVersion":"547","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e61f546c-a1b1-4412-af08-7c2ebe78d772","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e61f546c-a1b1-4412-af08-7c2ebe78d772\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller
":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 4297 chars]
	I0813 00:05:52.635890  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289-m02
	I0813 00:05:52.635908  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:52.635916  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:52.635921  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:52.637983  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:52.638007  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:52.638013  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:52 GMT
	I0813 00:05:52.638018  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:52.638023  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:52.638027  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:52.638031  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:52.638234  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289-m02","uid":"cde91947-f0b9-43a6-9224-431ab64254d1","resourceVersion":"558","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"
v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{" [truncated 5722 chars]
	I0813 00:05:53.133010  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8h4t8
	I0813 00:05:53.133038  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:53.133044  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:53.133048  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:53.135834  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:53.135853  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:53.135859  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:53.135864  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:53 GMT
	I0813 00:05:53.135869  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:53.135873  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:53.135877  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:53.136279  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8h4t8","generateName":"kube-proxy-","namespace":"kube-system","uid":"414dc4de-68ad-43b2-9847-aa0917e6d69d","resourceVersion":"562","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e61f546c-a1b1-4412-af08-7c2ebe78d772","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e61f546c-a1b1-4412-af08-7c2ebe78d772\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller
":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5807 chars]
	I0813 00:05:53.136602  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289-m02
	I0813 00:05:53.136615  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:53.136620  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:53.136624  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:53.139811  826514 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 00:05:53.139828  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:53.139833  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:53.139838  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:53.139842  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:53.139847  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:53.139852  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:53 GMT
	I0813 00:05:53.140290  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289-m02","uid":"cde91947-f0b9-43a6-9224-431ab64254d1","resourceVersion":"558","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"
v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{" [truncated 5722 chars]
	I0813 00:05:53.633013  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8h4t8
	I0813 00:05:53.633037  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:53.633049  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:53.633053  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:53.639537  826514 round_trippers.go:457] Response Status: 200 OK in 6 milliseconds
	I0813 00:05:53.639556  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:53.639561  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:53.639564  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:53.639567  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:53.639570  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:53.639573  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:53 GMT
	I0813 00:05:53.639680  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8h4t8","generateName":"kube-proxy-","namespace":"kube-system","uid":"414dc4de-68ad-43b2-9847-aa0917e6d69d","resourceVersion":"562","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e61f546c-a1b1-4412-af08-7c2ebe78d772","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e61f546c-a1b1-4412-af08-7c2ebe78d772\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller
":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5807 chars]
	I0813 00:05:53.640027  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289-m02
	I0813 00:05:53.640042  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:53.640047  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:53.640051  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:53.642477  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:53.642492  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:53.642497  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:53.642502  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:53.642506  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:53 GMT
	I0813 00:05:53.642511  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:53.642516  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:53.642705  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289-m02","uid":"cde91947-f0b9-43a6-9224-431ab64254d1","resourceVersion":"558","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"
v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{" [truncated 5722 chars]
	I0813 00:05:54.132933  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8h4t8
	I0813 00:05:54.132959  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:54.132965  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:54.132969  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:54.136486  826514 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 00:05:54.136505  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:54.136511  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:54.136516  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:54.136520  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:54.136524  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:54.136528  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:54 GMT
	I0813 00:05:54.136745  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8h4t8","generateName":"kube-proxy-","namespace":"kube-system","uid":"414dc4de-68ad-43b2-9847-aa0917e6d69d","resourceVersion":"562","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e61f546c-a1b1-4412-af08-7c2ebe78d772","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e61f546c-a1b1-4412-af08-7c2ebe78d772\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller
":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5807 chars]
	I0813 00:05:54.137188  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289-m02
	I0813 00:05:54.137209  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:54.137214  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:54.137218  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:54.142622  826514 round_trippers.go:457] Response Status: 200 OK in 5 milliseconds
	I0813 00:05:54.142636  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:54.142641  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:54.142646  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:54.142649  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:54.142653  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:54.142657  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:54 GMT
	I0813 00:05:54.142896  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289-m02","uid":"cde91947-f0b9-43a6-9224-431ab64254d1","resourceVersion":"558","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"
v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{" [truncated 5722 chars]
	I0813 00:05:54.143117  826514 pod_ready.go:102] pod "kube-proxy-8h4t8" in "kube-system" namespace has status "Ready":"False"
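
kube-proxy-8h4t8 was scheduled on the new worker only seconds earlier, so its Ready condition is still False; the timestamps above show pod_ready re-polling the pod and its node on a roughly 500ms cadence, and it will keep doing so until the condition flips or the 6m0s budget runs out. The loop reduces to a sketch like this, reusing the podIsReady helper and the imports from the node-ready sketch earlier:

    // Sketch of the ~500ms retry loop visible in the timestamps above;
    // interval and timeout are the values the log itself states.
    func pollKubeProxy(cs *kubernetes.Clientset) error {
        return wait.PollImmediate(500*time.Millisecond, 6*time.Minute,
            func() (bool, error) {
                p, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
                    "kube-proxy-8h4t8", metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat transient API errors as "retry"
                }
                return podIsReady(p), nil
            })
    }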
	I0813 00:05:54.633164  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8h4t8
	I0813 00:05:54.633185  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:54.633194  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:54.633198  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:54.636463  826514 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 00:05:54.636481  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:54.636485  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:54.636489  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:54.636492  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:54.636495  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:54.636498  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:54 GMT
	I0813 00:05:54.637004  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8h4t8","generateName":"kube-proxy-","namespace":"kube-system","uid":"414dc4de-68ad-43b2-9847-aa0917e6d69d","resourceVersion":"562","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e61f546c-a1b1-4412-af08-7c2ebe78d772","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e61f546c-a1b1-4412-af08-7c2ebe78d772\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller
":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5807 chars]
	I0813 00:05:54.637399  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289-m02
	I0813 00:05:54.637413  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:54.637418  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:54.637422  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:54.639553  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:54.639565  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:54.639571  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:54.639576  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:54.639580  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:54.639585  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:54 GMT
	I0813 00:05:54.639590  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:54.639782  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289-m02","uid":"cde91947-f0b9-43a6-9224-431ab64254d1","resourceVersion":"558","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"
v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{" [truncated 5722 chars]
	I0813 00:05:55.132441  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8h4t8
	I0813 00:05:55.132468  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:55.132474  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:55.132478  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:55.135768  826514 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 00:05:55.135787  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:55.135792  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:55.135795  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:55.135798  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:55.135804  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:55.135807  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:55 GMT
	I0813 00:05:55.135897  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8h4t8","generateName":"kube-proxy-","namespace":"kube-system","uid":"414dc4de-68ad-43b2-9847-aa0917e6d69d","resourceVersion":"562","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e61f546c-a1b1-4412-af08-7c2ebe78d772","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e61f546c-a1b1-4412-af08-7c2ebe78d772\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller
":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5807 chars]
	I0813 00:05:55.136231  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289-m02
	I0813 00:05:55.136245  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:55.136250  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:55.136254  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:55.139466  826514 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 00:05:55.139486  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:55.139493  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:55.139497  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:55 GMT
	I0813 00:05:55.139502  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:55.139507  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:55.139514  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:55.139645  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289-m02","uid":"cde91947-f0b9-43a6-9224-431ab64254d1","resourceVersion":"558","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"
v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{" [truncated 5722 chars]
	I0813 00:05:55.633008  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8h4t8
	I0813 00:05:55.633033  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:55.633039  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:55.633044  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:55.638091  826514 round_trippers.go:457] Response Status: 200 OK in 5 milliseconds
	I0813 00:05:55.638153  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:55.638167  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:55 GMT
	I0813 00:05:55.638173  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:55.638179  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:55.638185  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:55.638191  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:55.639141  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8h4t8","generateName":"kube-proxy-","namespace":"kube-system","uid":"414dc4de-68ad-43b2-9847-aa0917e6d69d","resourceVersion":"562","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e61f546c-a1b1-4412-af08-7c2ebe78d772","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e61f546c-a1b1-4412-af08-7c2ebe78d772\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller
":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5807 chars]
	I0813 00:05:55.639505  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289-m02
	I0813 00:05:55.639519  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:55.639524  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:55.639528  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:55.641735  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:55.641755  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:55.641762  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:55.641767  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:55.641771  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:55.641775  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:55.641780  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:55 GMT
	I0813 00:05:55.641954  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289-m02","uid":"cde91947-f0b9-43a6-9224-431ab64254d1","resourceVersion":"558","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"
v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{" [truncated 5722 chars]
	I0813 00:05:56.132627  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8h4t8
	I0813 00:05:56.132653  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:56.132659  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:56.132663  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:56.136208  826514 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 00:05:56.136270  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:56.136286  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:56.136292  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:56.136297  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:56.136302  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:56.136306  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:56 GMT
	I0813 00:05:56.136457  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8h4t8","generateName":"kube-proxy-","namespace":"kube-system","uid":"414dc4de-68ad-43b2-9847-aa0917e6d69d","resourceVersion":"562","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e61f546c-a1b1-4412-af08-7c2ebe78d772","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e61f546c-a1b1-4412-af08-7c2ebe78d772\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller
":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5807 chars]
	I0813 00:05:56.136835  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289-m02
	I0813 00:05:56.136851  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:56.136857  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:56.136863  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:56.141514  826514 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0813 00:05:56.141531  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:56.141537  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:56 GMT
	I0813 00:05:56.141542  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:56.141546  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:56.141550  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:56.141554  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:56.141962  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289-m02","uid":"cde91947-f0b9-43a6-9224-431ab64254d1","resourceVersion":"565","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"20
21-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{" [truncated 5602 chars]
	I0813 00:05:56.633288  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8h4t8
	I0813 00:05:56.633311  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:56.633316  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:56.633320  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:56.642768  826514 round_trippers.go:457] Response Status: 200 OK in 9 milliseconds
	I0813 00:05:56.642789  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:56.642794  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:56.642797  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:56.642801  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:56 GMT
	I0813 00:05:56.642804  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:56.642807  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:56.643101  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8h4t8","generateName":"kube-proxy-","namespace":"kube-system","uid":"414dc4de-68ad-43b2-9847-aa0917e6d69d","resourceVersion":"562","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e61f546c-a1b1-4412-af08-7c2ebe78d772","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e61f546c-a1b1-4412-af08-7c2ebe78d772\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller
":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5807 chars]
	I0813 00:05:56.643464  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289-m02
	I0813 00:05:56.643479  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:56.643484  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:56.643488  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:56.645990  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:56.646007  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:56.646013  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:56 GMT
	I0813 00:05:56.646016  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:56.646020  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:56.646026  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:56.646030  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:56.646340  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289-m02","uid":"cde91947-f0b9-43a6-9224-431ab64254d1","resourceVersion":"565","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"20
21-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{" [truncated 5602 chars]
	I0813 00:05:56.646717  826514 pod_ready.go:102] pod "kube-proxy-8h4t8" in "kube-system" namespace has status "Ready":"False"
	I0813 00:05:57.132773  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8h4t8
	I0813 00:05:57.132820  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:57.132830  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:57.132837  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:57.137480  826514 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0813 00:05:57.137503  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:57.137509  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:57.137512  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:57.137516  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:57 GMT
	I0813 00:05:57.137519  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:57.137525  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:57.137905  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8h4t8","generateName":"kube-proxy-","namespace":"kube-system","uid":"414dc4de-68ad-43b2-9847-aa0917e6d69d","resourceVersion":"562","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e61f546c-a1b1-4412-af08-7c2ebe78d772","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e61f546c-a1b1-4412-af08-7c2ebe78d772\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller
":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5807 chars]
	I0813 00:05:57.138258  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289-m02
	I0813 00:05:57.138273  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:57.138280  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:57.138286  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:57.141905  826514 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 00:05:57.141918  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:57.141922  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:57.141926  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:57.141929  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:57.141932  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:57.141936  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:57 GMT
	I0813 00:05:57.142359  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289-m02","uid":"cde91947-f0b9-43a6-9224-431ab64254d1","resourceVersion":"565","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"20
21-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{" [truncated 5602 chars]
	I0813 00:05:57.633116  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8h4t8
	I0813 00:05:57.633147  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:57.633152  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:57.633156  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:57.635479  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:57.635499  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:57.635504  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:57.635507  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:57.635510  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:57.635513  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:57 GMT
	I0813 00:05:57.635516  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:57.635841  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8h4t8","generateName":"kube-proxy-","namespace":"kube-system","uid":"414dc4de-68ad-43b2-9847-aa0917e6d69d","resourceVersion":"562","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e61f546c-a1b1-4412-af08-7c2ebe78d772","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e61f546c-a1b1-4412-af08-7c2ebe78d772\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller
":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5807 chars]
	I0813 00:05:57.636190  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289-m02
	I0813 00:05:57.636206  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:57.636212  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:57.636216  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:57.639337  826514 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 00:05:57.639349  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:57.639353  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:57.639355  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:57.639358  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:57.639361  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:57.639368  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:57 GMT
	I0813 00:05:57.639821  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289-m02","uid":"cde91947-f0b9-43a6-9224-431ab64254d1","resourceVersion":"565","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"20
21-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{" [truncated 5602 chars]
	I0813 00:05:58.132484  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8h4t8
	I0813 00:05:58.132508  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:58.132515  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:58.132520  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:58.136086  826514 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 00:05:58.136105  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:58.136111  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:58.136116  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:58.136120  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:58.136124  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:58.136128  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:58 GMT
	I0813 00:05:58.136519  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8h4t8","generateName":"kube-proxy-","namespace":"kube-system","uid":"414dc4de-68ad-43b2-9847-aa0917e6d69d","resourceVersion":"571","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e61f546c-a1b1-4412-af08-7c2ebe78d772","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e61f546c-a1b1-4412-af08-7c2ebe78d772\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller
":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5772 chars]
	I0813 00:05:58.136897  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289-m02
	I0813 00:05:58.136918  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:58.136925  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:58.136931  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:58.140061  826514 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 00:05:58.140080  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:58.140087  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:58.140092  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:58.140096  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:58.140101  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:58 GMT
	I0813 00:05:58.140105  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:58.140501  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289-m02","uid":"cde91947-f0b9-43a6-9224-431ab64254d1","resourceVersion":"565","creationTimestamp":"2021-08-13T00:05:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"20
21-08-13T00:05:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{" [truncated 5602 chars]
	I0813 00:05:58.140755  826514 pod_ready.go:92] pod "kube-proxy-8h4t8" in "kube-system" namespace has status "Ready":"True"
	I0813 00:05:58.140774  826514 pod_ready.go:81] duration metric: took 6.37897864s waiting for pod "kube-proxy-8h4t8" in "kube-system" namespace to be "Ready" ...
	I0813 00:05:58.140783  826514 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tvtvh" in "kube-system" namespace to be "Ready" ...
	I0813 00:05:58.140847  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tvtvh
	I0813 00:05:58.140859  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:58.140864  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:58.140868  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:58.142647  826514 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:05:58.142663  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:58.142669  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:58.142674  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:58.142678  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:58.142683  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:58.142691  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:58 GMT
	I0813 00:05:58.142981  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tvtvh","generateName":"kube-proxy-","namespace":"kube-system","uid":"7b54ef6d-43f6-4dc1-b3be-c5fb1b57a108","resourceVersion":"476","creationTimestamp":"2021-08-13T00:05:06Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e61f546c-a1b1-4412-af08-7c2ebe78d772","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e61f546c-a1b1-4412-af08-7c2ebe78d772\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller
":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5760 chars]
	I0813 00:05:58.143355  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289
	I0813 00:05:58.143372  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:58.143380  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:58.143386  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:58.146307  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:58.146320  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:58.146325  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:58.146328  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:58.146331  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:58.146334  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:58.146337  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:58 GMT
	I0813 00:05:58.147204  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","resourceVersion":"378","creationTimestamp":"2021-08-13T00:04:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813000359-820289","minikube.k8s.io/updated_at":"2021_08_13T00_04_54_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6557 chars]
	I0813 00:05:58.147476  826514 pod_ready.go:92] pod "kube-proxy-tvtvh" in "kube-system" namespace has status "Ready":"True"
	I0813 00:05:58.147485  826514 pod_ready.go:81] duration metric: took 6.694603ms waiting for pod "kube-proxy-tvtvh" in "kube-system" namespace to be "Ready" ...
	I0813 00:05:58.147494  826514 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-20210813000359-820289" in "kube-system" namespace to be "Ready" ...
	I0813 00:05:58.147552  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210813000359-820289
	I0813 00:05:58.147560  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:58.147566  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:58.147572  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:58.150664  826514 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 00:05:58.150677  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:58.150682  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:58.150686  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:58.150691  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:58.150696  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:58 GMT
	I0813 00:05:58.150700  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:58.151607  826514 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20210813000359-820289","namespace":"kube-system","uid":"f92e79ae-a806-4356-8c4f-e58f5355dac5","resourceVersion":"328","creationTimestamp":"2021-08-13T00:04:59Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"113dba97bad3e83d4c789adae2059392","kubernetes.io/config.mirror":"113dba97bad3e83d4c789adae2059392","kubernetes.io/config.seen":"2021-08-13T00:04:59.185608489Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:05:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:
kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:la [truncated 4543 chars]
	I0813 00:05:58.151916  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes/multinode-20210813000359-820289
	I0813 00:05:58.151933  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:58.151939  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:58.151945  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:58.154488  826514 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:05:58.154502  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:58.154506  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:58.154509  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:58.154512  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:58.154516  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:58 GMT
	I0813 00:05:58.154519  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:58.156157  826514 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","resourceVersion":"378","creationTimestamp":"2021-08-13T00:04:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813000359-820289","minikube.k8s.io/updated_at":"2021_08_13T00_04_54_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6557 chars]
	I0813 00:05:58.156407  826514 pod_ready.go:92] pod "kube-scheduler-multinode-20210813000359-820289" in "kube-system" namespace has status "Ready":"True"
	I0813 00:05:58.156420  826514 pod_ready.go:81] duration metric: took 8.919354ms waiting for pod "kube-scheduler-multinode-20210813000359-820289" in "kube-system" namespace to be "Ready" ...
	I0813 00:05:58.156430  826514 pod_ready.go:38] duration metric: took 6.425891571s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 00:05:58.156456  826514 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 00:05:58.156503  826514 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 00:05:58.168297  826514 system_svc.go:56] duration metric: took 11.837498ms WaitForService to wait for kubelet.
	I0813 00:05:58.168315  826514 kubeadm.go:547] duration metric: took 6.456243842s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 00:05:58.168332  826514 node_conditions.go:102] verifying NodePressure condition ...
	I0813 00:05:58.168378  826514 round_trippers.go:432] GET https://192.168.39.22:8443/api/v1/nodes
	I0813 00:05:58.168387  826514 round_trippers.go:438] Request Headers:
	I0813 00:05:58.168391  826514 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:05:58.168395  826514 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:05:58.172806  826514 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0813 00:05:58.172819  826514 round_trippers.go:460] Response Headers:
	I0813 00:05:58.172825  826514 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:05:58.172830  826514 round_trippers.go:463]     Content-Type: application/json
	I0813 00:05:58.172834  826514 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b97487a3-c03c-4d10-a908-9973fc6ef53a
	I0813 00:05:58.172839  826514 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cfc090db-ee2e-4526-a8bb-86b3a4442b6e
	I0813 00:05:58.172844  826514 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:05:58 GMT
	I0813 00:05:58.173613  826514 request.go:1123] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"572"},"items":[{"metadata":{"name":"multinode-20210813000359-820289","uid":"1218a901-3522-4975-a93b-e3332d7abf84","resourceVersion":"378","creationTimestamp":"2021-08-13T00:04:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813000359-820289","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813000359-820289","minikube.k8s.io/updated_at":"2021_08_13T00_04_54_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-mana
ged-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","opera [truncated 13204 chars]
	I0813 00:05:58.173957  826514 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 00:05:58.173977  826514 node_conditions.go:123] node cpu capacity is 2
	I0813 00:05:58.173993  826514 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 00:05:58.174000  826514 node_conditions.go:123] node cpu capacity is 2
	I0813 00:05:58.174012  826514 node_conditions.go:105] duration metric: took 5.670044ms to run NodePressure ...
	I0813 00:05:58.174025  826514 start.go:231] waiting for startup goroutines ...
	I0813 00:05:58.217590  826514 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0813 00:05:58.220259  826514 out.go:177] * Done! kubectl is now configured to use "multinode-20210813000359-820289" cluster and "default" namespace by default
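	(Editor's note: the trace above shows the readiness-poll pattern minikube uses while bringing up the second node: a GET against /api/v1/namespaces/kube-system/pods/kube-proxy-8h4t8 roughly every 500ms, each followed by a GET on the node object, until pod_ready.go reports "Ready":"True". The sketch below is a minimal client-go illustration of an equivalent wait loop, not minikube's actual pod_ready.go; the namespace, pod name, and 6m timeout are taken from the log, while the kubeconfig loading and function names are assumptions for the example.)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls the API server, much as the round_trippers entries
	// above record, until the pod's PodReady condition is True or the
	// timeout expires.
	func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil // corresponds to the pod_ready.go:92 "Ready":"True" line above
				}
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms polling cadence in the log
		}
		return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
	}

	func main() {
		// Assumes a reachable cluster via the default kubeconfig; adjust as needed.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitPodReady(cs, "kube-system", "kube-proxy-8h4t8", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println(`pod "kube-proxy-8h4t8" is Ready`)
	}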
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Fri 2021-08-13 00:04:10 UTC, end at Fri 2021-08-13 00:06:44 UTC. --
	Aug 13 00:06:43 multinode-20210813000359-820289 crio[2072]: time="2021-08-13 00:06:43.661683392Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bb24f7c3733accb9bf83243432a7b1dfe71c6ba332858cb895ee4b90e164c7e,PodSandboxId:8ebe136452482398b02ed3194e73d7f3b47c2ebe0c8636a1a5ea7c3bf727fd52,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,State:CONTAINER_RUNNING,CreatedAt:1628813162681483299,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-84b6686758-gpb9d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b2d74316-9ad3-435c-85a3-19e862cd06d2,},Annotations:map[string]string{io.kubernetes.container.hash: c2687e87,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d93d4cc078f93deae8a572b1cfdba5fb94f5fee55d54e5f8a31839e13954a0a9,PodSandboxId:87950fe11799e2db85a712b095c5a589e895487c070a8985dade001cc54d69d3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1,State:CONTAINER_RUNNING,CreatedAt:1628813110965706133,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rzxjz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 650bf88e-f784-45f9-8943-257e984acedb,},Annotations:map[string]string{io.kubernetes.container.hash: d48443f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30246750ae65a222bbebc82aec9e04ccd550ff645f3c13528cdd844fc3506ed9,PodSandboxId:ebf76d4c0e9109a98390ea67a365790c31f2b939162be891950db40068893dd2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628813110411451912,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9999a063-d32c-4253-8af3-7c28fdc3c692,},Annotations:map[string]string{io.kubernetes.container.hash: 504fcb50,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef6034d32143b6615e6876df647ca585cddd90a8020c7be214bf70a4392fc14f,PodSandboxId:df2707b50f79d6820754a6b0ce62caf95885b3dc8baa43d012d4da33484856c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628813109426681117,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-sstrb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16f6c77d-26a2-47e7-9c19-74736961cc13,},Annotations:map[string]string{io.kubernetes.container.hash: de0b46f5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:197270e3714f8da106e4432573059d334400188532dad024e8659b6ca68d3950,PodSandboxId:8bea8dc2306da2e31aa4367a6dc4dafa83e1f89129127b9ce0be1002b821ca45,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628813107623828740,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tvtvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b54ef6d-43f6-4dc1-b3be-c5fb
1b57a108,},Annotations:map[string]string{io.kubernetes.container.hash: 2b7f9cc8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:147ad965e8ea44e04be64c13be2c8fc9afa937d704566b412ecfcb2b1e079d95,PodSandboxId:5292cf223a2a94890741821b0cdf167047413c472b5b1094e36fc4e9938c133b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628813085373769587,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-20210813000359-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 113dba97bad3e83d4c789adae2
059392,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15143b6bceb4a8214e28179fabcae7dd97bf707e8479bff0016061130ed21a6b,PodSandboxId:9801a58b23d0586bb52d693945d09168ddbceabbd92383ddac6da7af798ee977,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628813085209894855,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-20210813000359-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc4647bddd439c8d0983a3b358a72513,},Annotations:map[string]string{io.k
ubernetes.container.hash: 2b8db17f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd7113723a04b479e05376effd00d05266a436fc3ad70a9db4d776e661744766,PodSandboxId:4a5904acbb4c46b25b53d6d1e794ab382871dec2032c217ee2bd4aad27dc7e34,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628813085052726011,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-20210813000359-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e7d0bc335d72432dc6bd22d4541dfbd,},
Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe1711400a92c91e4a417968e5aa7d64e9b5f216105c1ce6f378be5ba2438982,PodSandboxId:586605455cab03d2216a4dc4956feebd0ba82f7335f25f9df737fdfa8afd1cbf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628813084927482208,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-20210813000359-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0dcc263218298eb0bc9dd91ad6c2c6d,},Anno
tations:map[string]string{io.kubernetes.container.hash: f554a0df,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=dd95fee3-e44b-468f-af50-d1cf44c8c061 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 00:06:43 multinode-20210813000359-820289 crio[2072]: time="2021-08-13 00:06:43.943983858Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a5fc2159-cac1-4654-99e8-24810cc343ed name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 00:06:43 multinode-20210813000359-820289 crio[2072]: time="2021-08-13 00:06:43.944134619Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a5fc2159-cac1-4654-99e8-24810cc343ed name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 00:06:43 multinode-20210813000359-820289 crio[2072]: time="2021-08-13 00:06:43.944419094Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bb24f7c3733accb9bf83243432a7b1dfe71c6ba332858cb895ee4b90e164c7e,PodSandboxId:8ebe136452482398b02ed3194e73d7f3b47c2ebe0c8636a1a5ea7c3bf727fd52,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,State:CONTAINER_RUNNING,CreatedAt:1628813162681483299,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-84b6686758-gpb9d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b2d74316-9ad3-435c-85a3-19e862cd06d2,},Annotations:map[string]string{io.kubernetes.container.hash: c2687e87,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d93d4cc078f93deae8a572b1cfdba5fb94f5fee55d54e5f8a31839e13954a0a9,PodSandboxId:87950fe11799e2db85a712b095c5a589e895487c070a8985dade001cc54d69d3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1,State:CONTAINER_RUNNING,CreatedAt:1628813110965706133,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rzxjz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 650bf88e-f784-45f9-8943-257e984acedb,},Annotations:map[string]string{io.kubernetes.container.hash: d48443f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30246750ae65a222bbebc82aec9e04ccd550ff645f3c13528cdd844fc3506ed9,PodSandboxId:ebf76d4c0e9109a98390ea67a365790c31f2b939162be891950db40068893dd2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628813110411451912,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9999a063-d32c-4253-8af3-7c28fdc3c692,},Annotations:map[string]string{io.kubernetes.container.hash: 504fcb50,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef6034d32143b6615e6876df647ca585cddd90a8020c7be214bf70a4392fc14f,PodSandboxId:df2707b50f79d6820754a6b0ce62caf95885b3dc8baa43d012d4da33484856c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628813109426681117,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-sstrb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16f6c77d-26a2-47e7-9c19-74736961cc13,},Annotations:map[string]string{io.kubernetes.container.hash: de0b46f5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:197270e3714f8da106e4432573059d334400188532dad024e8659b6ca68d3950,PodSandboxId:8bea8dc2306da2e31aa4367a6dc4dafa83e1f89129127b9ce0be1002b821ca45,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628813107623828740,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tvtvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b54ef6d-43f6-4dc1-b3be-c5fb
1b57a108,},Annotations:map[string]string{io.kubernetes.container.hash: 2b7f9cc8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:147ad965e8ea44e04be64c13be2c8fc9afa937d704566b412ecfcb2b1e079d95,PodSandboxId:5292cf223a2a94890741821b0cdf167047413c472b5b1094e36fc4e9938c133b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628813085373769587,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-20210813000359-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 113dba97bad3e83d4c789adae2
059392,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15143b6bceb4a8214e28179fabcae7dd97bf707e8479bff0016061130ed21a6b,PodSandboxId:9801a58b23d0586bb52d693945d09168ddbceabbd92383ddac6da7af798ee977,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628813085209894855,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-20210813000359-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc4647bddd439c8d0983a3b358a72513,},Annotations:map[string]string{io.k
ubernetes.container.hash: 2b8db17f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd7113723a04b479e05376effd00d05266a436fc3ad70a9db4d776e661744766,PodSandboxId:4a5904acbb4c46b25b53d6d1e794ab382871dec2032c217ee2bd4aad27dc7e34,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628813085052726011,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-20210813000359-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e7d0bc335d72432dc6bd22d4541dfbd,},
Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe1711400a92c91e4a417968e5aa7d64e9b5f216105c1ce6f378be5ba2438982,PodSandboxId:586605455cab03d2216a4dc4956feebd0ba82f7335f25f9df737fdfa8afd1cbf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628813084927482208,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-20210813000359-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0dcc263218298eb0bc9dd91ad6c2c6d,},Anno
tations:map[string]string{io.kubernetes.container.hash: f554a0df,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a5fc2159-cac1-4654-99e8-24810cc343ed name=/runtime.v1alpha2.RuntimeService/ListContainers
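
The block above is one complete poll cycle: the kubelet's CRI client calls /runtime.v1alpha2.RuntimeService/ListContainers with an empty ContainerFilter, CRI-O logs "No filters were applied", and the full container list for the node comes back. For reference only (not part of the test run), here is a minimal Go sketch of the same RPC against CRI-O, using the k8s.io/cri-api v1alpha2 bindings current for this report; the socket path and the 5s timeout are assumptions, not values taken from these logs:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	pb "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
)

func main() {
	// Assumption: CRI-O is listening on its default unix socket.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithInsecure(), grpc.WithBlock())
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// An empty ContainerFilter matches everything, which is why CRI-O
	// logs "No filters were applied, returning full container list".
	client := pb.NewRuntimeServiceClient(conn)
	resp, err := client.ListContainers(ctx, &pb.ListContainersRequest{
		Filter: &pb.ContainerFilter{},
	})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s %-25s %s\n", c.Id[:12], c.Metadata.Name, c.State)
	}
}

The same endpoint can be driven by hand with crictl (e.g. crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a), which issues this ListContainers RPC under the hood.
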
	Aug 13 00:06:43 multinode-20210813000359-820289 crio[2072]: time="2021-08-13 00:06:43.984972648Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a6f71fa3-d711-4890-86e8-fba8346e841a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 00:06:43 multinode-20210813000359-820289 crio[2072]: time="2021-08-13 00:06:43.985116300Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a6f71fa3-d711-4890-86e8-fba8346e841a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 00:06:43 multinode-20210813000359-820289 crio[2072]: time="2021-08-13 00:06:43.985409667Z" level=debug msg="Response: &ListContainersResponse{Containers:[…identical container list as in the response above…],}" file="go-grpc-middleware/chain.go:25" id=a5fc2159-cac1-4654-99e8-24810cc343ed name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 00:06:44 multinode-20210813000359-820289 crio[2072]: time="2021-08-13 00:06:44.036093963Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0e269494-54c2-42ec-923c-bcdbf553bb62 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 00:06:44 multinode-20210813000359-820289 crio[2072]: time="2021-08-13 00:06:44.036394706Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0e269494-54c2-42ec-923c-bcdbf553bb62 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 00:06:44 multinode-20210813000359-820289 crio[2072]: time="2021-08-13 00:06:44.036961390Z" level=debug msg="Response: &ListContainersResponse{Containers:[…identical container list as above…],}" file="go-grpc-middleware/chain.go:25" id=0e269494-54c2-42ec-923c-bcdbf553bb62 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 00:06:44 multinode-20210813000359-820289 crio[2072]: time="2021-08-13 00:06:44.079612034Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=dae7a08a-e6dc-4f9c-b518-9598d13a87ec name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 00:06:44 multinode-20210813000359-820289 crio[2072]: time="2021-08-13 00:06:44.079670779Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=dae7a08a-e6dc-4f9c-b518-9598d13a87ec name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 00:06:44 multinode-20210813000359-820289 crio[2072]: time="2021-08-13 00:06:44.079877875Z" level=debug msg="Response: &ListContainersResponse{Containers:[…identical container list as above…],}" file="go-grpc-middleware/chain.go:25" id=dae7a08a-e6dc-4f9c-b518-9598d13a87ec name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 00:06:44 multinode-20210813000359-820289 crio[2072]: time="2021-08-13 00:06:44.121654926Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=aeda3d55-f21e-4b7a-8dc5-08822d92f1bc name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 00:06:44 multinode-20210813000359-820289 crio[2072]: time="2021-08-13 00:06:44.121957569Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=aeda3d55-f21e-4b7a-8dc5-08822d92f1bc name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 00:06:44 multinode-20210813000359-820289 crio[2072]: time="2021-08-13 00:06:44.122896745Z" level=debug msg="Response: &ListContainersResponse{Containers:[…identical container list as above…],}" file="go-grpc-middleware/chain.go:25" id=aeda3d55-f21e-4b7a-8dc5-08822d92f1bc name=/runtime.v1alpha2.RuntimeService/ListContainers
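
Each of these polls uses an empty filter, hence the recurring "No filters were applied" line. For contrast, a scoped query (reusing client, ctx, and pb from the sketch above) that lists only the busybox pod's containers, keyed by its sandbox id as it appears in the responses above, might look like:

	resp, err := client.ListContainers(ctx, &pb.ListContainersRequest{
		Filter: &pb.ContainerFilter{
			// Sandbox id of the busybox pod, taken from the responses above.
			PodSandboxId: "8ebe136452482398b02ed3194e73d7f3b47c2ebe0c8636a1a5ea7c3bf727fd52",
		},
	})

With a non-empty filter, CRI-O skips the full-list path and returns only the matching containers.
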
	Aug 13 00:06:44 multinode-20210813000359-820289 crio[2072]: time="2021-08-13 00:06:44.164457337Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b7587cc5-bc58-47e8-be22-c1bf558f5620 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 00:06:44 multinode-20210813000359-820289 crio[2072]: time="2021-08-13 00:06:44.164518522Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b7587cc5-bc58-47e8-be22-c1bf558f5620 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 00:06:44 multinode-20210813000359-820289 crio[2072]: time="2021-08-13 00:06:44.164747360Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bb24f7c3733accb9bf83243432a7b1dfe71c6ba332858cb895ee4b90e164c7e,PodSandboxId:8ebe136452482398b02ed3194e73d7f3b47c2ebe0c8636a1a5ea7c3bf727fd52,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,State:CONTAINER_RUNNING,CreatedAt:1628813162681483299,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-84b6686758-gpb9d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b2d74316-9ad3-435c-85a3-19e862cd06d2,},Annotations:map[string]string{io.kubernetes.container.hash: c2687e87,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d93d4cc078f93deae8a572b1cfdba5fb94f5fee55d54e5f8a31839e13954a0a9,PodSandboxId:87950fe11799e2db85a712b095c5a589e895487c070a8985dade001cc54d69d3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1,State:CONTAINER_RUNNING,CreatedAt:1628813110965706133,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rzxjz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 650bf88e-f784-45f9-8943-257e984acedb,},Annotations:map[string]string{io.kubernetes.container.hash: d48443f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30246750ae65a222bbebc82aec9e04ccd550ff645f3c13528cdd844fc3506ed9,PodSandboxId:ebf76d4c0e9109a98390ea67a365790c31f2b939162be891950db40068893dd2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628813110411451912,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9999a063-d32c-4253-8af3-7c28fdc3c692,},Annotations:map[string]string{io.kubernetes.container.hash: 504fcb50,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef6034d32143b6615e6876df647ca585cddd90a8020c7be214bf70a4392fc14f,PodSandboxId:df2707b50f79d6820754a6b0ce62caf95885b3dc8baa43d012d4da33484856c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628813109426681117,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-sstrb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16f6c77d-26a2-47e7-9c19-74736961cc13,},Annotations:map[string]string{io.kubernetes.container.hash: de0b46f5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:197270e3714f8da106e4432573059d334400188532dad024e8659b6ca68d3950,PodSandboxId:8bea8dc2306da2e31aa4367a6dc4dafa83e1f89129127b9ce0be1002b821ca45,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628813107623828740,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tvtvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b54ef6d-43f6-4dc1-b3be-c5fb
1b57a108,},Annotations:map[string]string{io.kubernetes.container.hash: 2b7f9cc8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:147ad965e8ea44e04be64c13be2c8fc9afa937d704566b412ecfcb2b1e079d95,PodSandboxId:5292cf223a2a94890741821b0cdf167047413c472b5b1094e36fc4e9938c133b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628813085373769587,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-20210813000359-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 113dba97bad3e83d4c789adae2
059392,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15143b6bceb4a8214e28179fabcae7dd97bf707e8479bff0016061130ed21a6b,PodSandboxId:9801a58b23d0586bb52d693945d09168ddbceabbd92383ddac6da7af798ee977,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628813085209894855,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-20210813000359-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc4647bddd439c8d0983a3b358a72513,},Annotations:map[string]string{io.k
ubernetes.container.hash: 2b8db17f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd7113723a04b479e05376effd00d05266a436fc3ad70a9db4d776e661744766,PodSandboxId:4a5904acbb4c46b25b53d6d1e794ab382871dec2032c217ee2bd4aad27dc7e34,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628813085052726011,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-20210813000359-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e7d0bc335d72432dc6bd22d4541dfbd,},
Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe1711400a92c91e4a417968e5aa7d64e9b5f216105c1ce6f378be5ba2438982,PodSandboxId:586605455cab03d2216a4dc4956feebd0ba82f7335f25f9df737fdfa8afd1cbf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628813084927482208,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-20210813000359-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0dcc263218298eb0bc9dd91ad6c2c6d,},Anno
tations:map[string]string{io.kubernetes.container.hash: f554a0df,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b7587cc5-bc58-47e8-be22-c1bf558f5620 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 00:06:44 multinode-20210813000359-820289 crio[2072]: time="2021-08-13 00:06:44.197827926Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7510f002-5511-47a7-ab52-52925d386a61 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 00:06:44 multinode-20210813000359-820289 crio[2072]: time="2021-08-13 00:06:44.197875237Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7510f002-5511-47a7-ab52-52925d386a61 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 00:06:44 multinode-20210813000359-820289 crio[2072]: time="2021-08-13 00:06:44.198046943Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bb24f7c3733accb9bf83243432a7b1dfe71c6ba332858cb895ee4b90e164c7e,PodSandboxId:8ebe136452482398b02ed3194e73d7f3b47c2ebe0c8636a1a5ea7c3bf727fd52,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,State:CONTAINER_RUNNING,CreatedAt:1628813162681483299,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-84b6686758-gpb9d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b2d74316-9ad3-435c-85a3-19e862cd06d2,},Annotations:map[string]string{io.kubernetes.container.hash: c2687e87,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d93d4cc078f93deae8a572b1cfdba5fb94f5fee55d54e5f8a31839e13954a0a9,PodSandboxId:87950fe11799e2db85a712b095c5a589e895487c070a8985dade001cc54d69d3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1,State:CONTAINER_RUNNING,CreatedAt:1628813110965706133,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rzxjz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 650bf88e-f784-45f9-8943-257e984acedb,},Annotations:map[string]string{io.kubernetes.container.hash: d48443f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30246750ae65a222bbebc82aec9e04ccd550ff645f3c13528cdd844fc3506ed9,PodSandboxId:ebf76d4c0e9109a98390ea67a365790c31f2b939162be891950db40068893dd2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628813110411451912,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9999a063-d32c-4253-8af3-7c28fdc3c692,},Annotations:map[string]string{io.kubernetes.container.hash: 504fcb50,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef6034d32143b6615e6876df647ca585cddd90a8020c7be214bf70a4392fc14f,PodSandboxId:df2707b50f79d6820754a6b0ce62caf95885b3dc8baa43d012d4da33484856c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628813109426681117,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-sstrb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16f6c77d-26a2-47e7-9c19-74736961cc13,},Annotations:map[string]string{io.kubernetes.container.hash: de0b46f5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:197270e3714f8da106e4432573059d334400188532dad024e8659b6ca68d3950,PodSandboxId:8bea8dc2306da2e31aa4367a6dc4dafa83e1f89129127b9ce0be1002b821ca45,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628813107623828740,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tvtvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b54ef6d-43f6-4dc1-b3be-c5fb
1b57a108,},Annotations:map[string]string{io.kubernetes.container.hash: 2b7f9cc8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:147ad965e8ea44e04be64c13be2c8fc9afa937d704566b412ecfcb2b1e079d95,PodSandboxId:5292cf223a2a94890741821b0cdf167047413c472b5b1094e36fc4e9938c133b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628813085373769587,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-20210813000359-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 113dba97bad3e83d4c789adae2
059392,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15143b6bceb4a8214e28179fabcae7dd97bf707e8479bff0016061130ed21a6b,PodSandboxId:9801a58b23d0586bb52d693945d09168ddbceabbd92383ddac6da7af798ee977,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628813085209894855,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-20210813000359-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc4647bddd439c8d0983a3b358a72513,},Annotations:map[string]string{io.k
ubernetes.container.hash: 2b8db17f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd7113723a04b479e05376effd00d05266a436fc3ad70a9db4d776e661744766,PodSandboxId:4a5904acbb4c46b25b53d6d1e794ab382871dec2032c217ee2bd4aad27dc7e34,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628813085052726011,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-20210813000359-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e7d0bc335d72432dc6bd22d4541dfbd,},
Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe1711400a92c91e4a417968e5aa7d64e9b5f216105c1ce6f378be5ba2438982,PodSandboxId:586605455cab03d2216a4dc4956feebd0ba82f7335f25f9df737fdfa8afd1cbf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628813084927482208,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-20210813000359-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0dcc263218298eb0bc9dd91ad6c2c6d,},Anno
tations:map[string]string{io.kubernetes.container.hash: f554a0df,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7510f002-5511-47a7-ab52-52925d386a61 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 00:06:44 multinode-20210813000359-820289 crio[2072]: time="2021-08-13 00:06:44.236731925Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9001ec0b-fbde-43b8-ad70-655177dc15f6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 00:06:44 multinode-20210813000359-820289 crio[2072]: time="2021-08-13 00:06:44.236783403Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9001ec0b-fbde-43b8-ad70-655177dc15f6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 00:06:44 multinode-20210813000359-820289 crio[2072]: time="2021-08-13 00:06:44.236958791Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bb24f7c3733accb9bf83243432a7b1dfe71c6ba332858cb895ee4b90e164c7e,PodSandboxId:8ebe136452482398b02ed3194e73d7f3b47c2ebe0c8636a1a5ea7c3bf727fd52,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47,State:CONTAINER_RUNNING,CreatedAt:1628813162681483299,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-84b6686758-gpb9d,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b2d74316-9ad3-435c-85a3-19e862cd06d2,},Annotations:map[string]string{io.kubernetes.container.hash: c2687e87,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d93d4cc078f93deae8a572b1cfdba5fb94f5fee55d54e5f8a31839e13954a0a9,PodSandboxId:87950fe11799e2db85a712b095c5a589e895487c070a8985dade001cc54d69d3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1,State:CONTAINER_RUNNING,CreatedAt:1628813110965706133,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rzxjz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 650bf88e-f784-45f9-8943-257e984acedb,},Annotations:map[string]string{io.kubernetes.container.hash: d48443f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30246750ae65a222bbebc82aec9e04ccd550ff645f3c13528cdd844fc3506ed9,PodSandboxId:ebf76d4c0e9109a98390ea67a365790c31f2b939162be891950db40068893dd2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1628813110411451912,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9999a063-d32c-4253-8af3-7c28fdc3c692,},Annotations:map[string]string{io.kubernetes.container.hash: 504fcb50,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef6034d32143b6615e6876df647ca585cddd90a8020c7be214bf70a4392fc14f,PodSandboxId:df2707b50f79d6820754a6b0ce62caf95885b3dc8baa43d012d4da33484856c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61,State:CONTAINER_RUNNING,CreatedAt:1628813109426681117,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-558bd4d5db-sstrb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16f6c77d-26a2-47e7-9c19-74736961cc13,},Annotations:map[string]string{io.kubernetes.container.hash: de0b46f5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:197270e3714f8da106e4432573059d334400188532dad024e8659b6ca68d3950,PodSandboxId:8bea8dc2306da2e31aa4367a6dc4dafa83e1f89129127b9ce0be1002b821ca45,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b,State:CONTAINER_RUNNING,CreatedAt:1628813107623828740,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tvtvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b54ef6d-43f6-4dc1-b3be-c5fb
1b57a108,},Annotations:map[string]string{io.kubernetes.container.hash: 2b7f9cc8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:147ad965e8ea44e04be64c13be2c8fc9afa937d704566b412ecfcb2b1e079d95,PodSandboxId:5292cf223a2a94890741821b0cdf167047413c472b5b1094e36fc4e9938c133b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4,State:CONTAINER_RUNNING,CreatedAt:1628813085373769587,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-20210813000359-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 113dba97bad3e83d4c789adae2
059392,},Annotations:map[string]string{io.kubernetes.container.hash: bde20ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15143b6bceb4a8214e28179fabcae7dd97bf707e8479bff0016061130ed21a6b,PodSandboxId:9801a58b23d0586bb52d693945d09168ddbceabbd92383ddac6da7af798ee977,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2,State:CONTAINER_RUNNING,CreatedAt:1628813085209894855,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-20210813000359-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc4647bddd439c8d0983a3b358a72513,},Annotations:map[string]string{io.k
ubernetes.container.hash: 2b8db17f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd7113723a04b479e05376effd00d05266a436fc3ad70a9db4d776e661744766,PodSandboxId:4a5904acbb4c46b25b53d6d1e794ab382871dec2032c217ee2bd4aad27dc7e34,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b,State:CONTAINER_RUNNING,CreatedAt:1628813085052726011,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-20210813000359-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e7d0bc335d72432dc6bd22d4541dfbd,},
Annotations:map[string]string{io.kubernetes.container.hash: dfe11a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe1711400a92c91e4a417968e5aa7d64e9b5f216105c1ce6f378be5ba2438982,PodSandboxId:586605455cab03d2216a4dc4956feebd0ba82f7335f25f9df737fdfa8afd1cbf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2,State:CONTAINER_RUNNING,CreatedAt:1628813084927482208,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-20210813000359-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0dcc263218298eb0bc9dd91ad6c2c6d,},Anno
tations:map[string]string{io.kubernetes.container.hash: f554a0df,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9001ec0b-fbde-43b8-ad70-655177dc15f6 name=/runtime.v1alpha2.RuntimeService/ListContainers
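
The three identical request/response pairs above are the kubelet's routine container-status polling of CRI-O over its gRPC socket, not a fault: each carries a different request id and an empty filter, and each returns the same set of running containers. The same call can be issued directly; below is a minimal Go sketch, assuming the v1alpha2 CRI API these log lines name (/runtime.v1alpha2.RuntimeService/ListContainers) and the socket path from the node annotation (/var/run/crio/crio.sock). It is an illustrative client, not part of the test harness:

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
    )

    func main() {
        // Dial the socket named in the node's kubeadm.alpha.kubernetes.io/cri-socket annotation.
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock", grpc.WithInsecure())
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // An empty filter takes the "No filters were applied, returning full
        // container list" path seen in the crio debug output above.
        resp, err := runtimeapi.NewRuntimeServiceClient(conn).ListContainers(ctx,
            &runtimeapi.ListContainersRequest{Filter: &runtimeapi.ContainerFilter{}})
        if err != nil {
            panic(err)
        }
        for _, c := range resp.Containers {
            fmt.Printf("%s  %-25s %s\n", c.Id[:13], c.Metadata.Name, c.State)
        }
    }

Pointing crictl at the same socket (crictl ps -a) renders the same data as the container status table that follows.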
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                               CREATED              STATE               NAME                      ATTEMPT             POD ID
	2bb24f7c3733a       docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47   41 seconds ago       Running             busybox                   0                   8ebe136452482
	d93d4cc078f93       6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb                                    About a minute ago   Running             kindnet-cni               0                   87950fe11799e
	30246750ae65a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                    About a minute ago   Running             storage-provisioner       0                   ebf76d4c0e910
	ef6034d32143b       296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899                                    About a minute ago   Running             coredns                   0                   df2707b50f79d
	197270e3714f8       adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92                                    About a minute ago   Running             kube-proxy                0                   8bea8dc2306da
	147ad965e8ea4       6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a                                    About a minute ago   Running             kube-scheduler            0                   5292cf223a2a9
	15143b6bceb4a       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934                                    About a minute ago   Running             etcd                      0                   9801a58b23d05
	cd7113723a04b       bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9                                    About a minute ago   Running             kube-controller-manager   0                   4a5904acbb4c4
	fe1711400a92c       3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80                                    About a minute ago   Running             kube-apiserver            0                   586605455cab0
	
	* 
	* ==> coredns [ef6034d32143b6615e6876df647ca585cddd90a8020c7be214bf70a4392fc14f] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = 8f51b271a18f2ce6fcaee5f1cfda3ed0
	[INFO] Reloading complete
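
The pair of "Running configuration MD5" lines brackets a live reload: CoreDNS's reload plugin re-hashes the loaded Corefile on an interval and re-reads it once the digest changes, which is what happens when minikube rewrites the coredns ConfigMap. A minimal Go sketch of that digest check; /etc/coredns/Corefile is the conventional mount path of the ConfigMap and stands in here for whatever file the plugin watches:

    package main

    import (
        "crypto/md5"
        "fmt"
        "os"
    )

    // corefileMD5 hashes the on-disk Corefile; an unchanged file
    // yields the same digest on every poll.
    func corefileMD5(path string) (string, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return "", err
        }
        return fmt.Sprintf("%x", md5.Sum(data)), nil
    }

    func main() {
        sum, err := corefileMD5("/etc/coredns/Corefile")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // A digest differing from the previously logged one means the
        // configuration changed and a reload is due.
        fmt.Println("configuration MD5 =", sum)
    }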
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-20210813000359-820289
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-20210813000359-820289
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=dc1c3ca26e9449ce488a773126b8450402c94a19
	                    minikube.k8s.io/name=multinode-20210813000359-820289
	                    minikube.k8s.io/updated_at=2021_08_13T00_04_54_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 00:04:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-20210813000359-820289
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Aug 2021 00:06:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 00:06:29 +0000   Fri, 13 Aug 2021 00:04:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 00:06:29 +0000   Fri, 13 Aug 2021 00:04:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 00:06:29 +0000   Fri, 13 Aug 2021 00:04:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 00:06:29 +0000   Fri, 13 Aug 2021 00:05:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.22
	  Hostname:    multinode-20210813000359-820289
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186320Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186320Ki
	  pods:               110
	System Info:
	  Machine ID:                 d38d599800bb433e9cf69a669c4ea971
	  System UUID:                d38d5998-00bb-433e-9cf6-9a669c4ea971
	  Boot ID:                    1695864a-b84f-4769-a5d2-70e036721d1a
	  Kernel Version:             4.19.182
	  OS Image:                   Buildroot 2020.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.20.2
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-84b6686758-gpb9d                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 coredns-558bd4d5db-sstrb                                   100m (5%)     0 (0%)      70Mi (3%)        170Mi (7%)     98s
	  kube-system                 etcd-multinode-20210813000359-820289                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         105s
	  kube-system                 kindnet-rzxjz                                              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      98s
	  kube-system                 kube-apiserver-multinode-20210813000359-820289             250m (12%)    0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-controller-manager-multinode-20210813000359-820289    200m (10%)    0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-tvtvh                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-scheduler-multinode-20210813000359-820289             100m (5%)     0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 105s  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  105s  kubelet     Node multinode-20210813000359-820289 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    105s  kubelet     Node multinode-20210813000359-820289 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     105s  kubelet     Node multinode-20210813000359-820289 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  105s  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                99s   kubelet     Node multinode-20210813000359-820289 status is now: NodeReady
	  Normal  Starting                 96s   kube-proxy  Starting kube-proxy.
	
	
	Name:               multinode-20210813000359-820289-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-20210813000359-820289-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 00:05:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-20210813000359-820289-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Aug 2021 00:06:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 00:06:21 +0000   Fri, 13 Aug 2021 00:05:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 00:06:21 +0000   Fri, 13 Aug 2021 00:05:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 00:06:21 +0000   Fri, 13 Aug 2021 00:05:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 00:06:21 +0000   Fri, 13 Aug 2021 00:05:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.152
	  Hostname:    multinode-20210813000359-820289-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186320Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186320Ki
	  pods:               110
	System Info:
	  Machine ID:                 781605be65944ef8b9544ba2115c5324
	  System UUID:                781605be-6594-4ef8-b954-4ba2115c5324
	  Boot ID:                    28e74cce-06b7-45b4-8a51-dfcbd3cde66e
	  Kernel Version:             4.19.182
	  OS Image:                   Buildroot 2020.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.20.2
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-84b6686758-p6fb8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kindnet-dpckf               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      54s
	  kube-system                 kube-proxy-8h4t8            0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 54s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x2 over 54s)  kubelet     Node multinode-20210813000359-820289-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x2 over 54s)  kubelet     Node multinode-20210813000359-820289-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x2 over 54s)  kubelet     Node multinode-20210813000359-820289-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  54s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                53s                kubelet     Node multinode-20210813000359-820289-m02 status is now: NodeReady
	  Normal  Starting                 47s                kube-proxy  Starting kube-proxy.
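
Both node descriptions above reduce to the same four conditions (MemoryPressure, DiskPressure, PIDPressure, Ready), and both nodes report Ready. A short client-go sketch that retrieves the same condition table programmatically; the kubeconfig path is an illustrative placeholder:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Illustrative kubeconfig path; minikube writes one per profile.
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }

        nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, node := range nodes.Items {
            for _, cond := range node.Status.Conditions {
                // Mirrors the Conditions table in the describe output: a healthy
                // node is False on the pressure conditions and True on Ready.
                fmt.Printf("%-40s %-16s %s\n", node.Name, cond.Type, cond.Status)
            }
        }
    }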
	
	* 
	* ==> dmesg <==
	* [Aug13 00:04] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.093803] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +3.721935] Unstable clock detected, switching default tracing clock to "global"
	              If you want to keep using the local clock, then add:
	                "trace_clock=local"
	              on the kernel command line
	[  +0.000025] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.172040] systemd-fstab-generator[1160]: Ignoring "noauto" for root device
	[  +0.033747] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +0.931532] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1719 comm=systemd-network
	[  +1.025329] vboxguest: loading out-of-tree module taints kernel.
	[  +0.005870] vboxguest: PCI device not found, probably running on physical hardware.
	[  +1.431390] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[ +14.205447] systemd-fstab-generator[2161]: Ignoring "noauto" for root device
	[  +0.127782] systemd-fstab-generator[2174]: Ignoring "noauto" for root device
	[  +0.189875] systemd-fstab-generator[2202]: Ignoring "noauto" for root device
	[  +8.599155] systemd-fstab-generator[2405]: Ignoring "noauto" for root device
	[ +17.366209] systemd-fstab-generator[2821]: Ignoring "noauto" for root device
	[Aug13 00:05] kauditd_printk_skb: 38 callbacks suppressed
	[Aug13 00:06] NFSD: Unable to end grace period: -110
	
	* 
	* ==> etcd [15143b6bceb4a8214e28179fabcae7dd97bf707e8479bff0016061130ed21a6b] <==
	* 2021-08-13 00:04:50.671588 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-scheduler-multinode-20210813000359-820289\" " with result "range_response_count:0 size:4" took too long (236.74127ms) to execute
	2021-08-13 00:04:50.671819 W | etcdserver: read-only range request "key:\"/registry/csinodes/multinode-20210813000359-820289\" " with result "range_response_count:0 size:4" took too long (237.052743ms) to execute
	2021-08-13 00:04:59.705476 W | etcdserver: read-only range request "key:\"/registry/minions/multinode-20210813000359-820289\" " with result "range_response_count:1 size:5602" took too long (219.610465ms) to execute
	2021-08-13 00:04:59.707292 W | etcdserver: read-only range request "key:\"/registry/csinodes/multinode-20210813000359-820289\" " with result "range_response_count:1 size:668" took too long (220.7375ms) to execute
	2021-08-13 00:04:59.707570 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (120.536562ms) to execute
	2021-08-13 00:05:01.781354 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:05:08.548115 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:05:18.546875 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:05:28.546344 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:05:38.547071 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:05:42.324408 W | wal: sync duration of 1.259616242s, expected less than 1s
	2021-08-13 00:05:42.357370 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:1 size:6118" took too long (499.452475ms) to execute
	2021-08-13 00:05:42.357549 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (649.015558ms) to execute
	2021-08-13 00:05:43.465450 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:1129" took too long (390.915031ms) to execute
	2021-08-13 00:05:43.465757 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:421" took too long (1.026973983s) to execute
	2021-08-13 00:05:43.465907 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (767.15358ms) to execute
	2021-08-13 00:05:48.547140 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:05:50.863435 W | etcdserver: read-only range request "key:\"/registry/csinodes/multinode-20210813000359-820289-m02\" " with result "range_response_count:0 size:5" took too long (173.058336ms) to execute
	2021-08-13 00:05:50.880773 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (179.92153ms) to execute
	2021-08-13 00:05:55.812376 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (114.544574ms) to execute
	2021-08-13 00:05:58.546455 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:06:08.547405 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:06:18.547549 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:06:28.546990 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:06:38.546240 I | etcdserver/api/etcdhttp: /health OK (status code 200)
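
Aside from the steady /health probes, the etcd log above is dominated by read-only range requests that tripped the slow-request warning, plus one WAL sync of 1.259616242s against an expected budget of under 1s; on a 2-CPU KVM guest this pattern usually indicates disk latency rather than request volume. A small Go sketch for pulling those latencies out of a saved log on stdin; the regular expression targets the exact "took too long (...) to execute" wording above:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
        "time"
    )

    func main() {
        // Matches etcd warnings such as:
        //   ... took too long (1.026973983s) to execute
        slow := regexp.MustCompile(`took too long \(([0-9.]+(?:µs|ms|s))\) to execute`)

        scanner := bufio.NewScanner(os.Stdin)
        scanner.Buffer(make([]byte, 1024*1024), 1024*1024) // etcd lines can be long
        for scanner.Scan() {
            m := slow.FindStringSubmatch(scanner.Text())
            if m == nil {
                continue
            }
            d, err := time.ParseDuration(m[1])
            if err != nil {
                continue
            }
            // Anything past half a second is worth correlating with WAL sync warnings.
            if d > 500*time.Millisecond {
                fmt.Printf("%v\t%s\n", d, scanner.Text())
            }
        }
    }

Fed this etcd section on stdin, it prints only the requests above half a second, including the 1.026973983s read of the default/kubernetes endpoints at 00:05:43.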
	
	* 
	* ==> kernel <==
	*  00:06:44 up 2 min,  0 users,  load average: 1.12, 0.57, 0.22
	Linux multinode-20210813000359-820289 4.19.182 #1 SMP Fri Aug 6 09:11:32 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [fe1711400a92c91e4a417968e5aa7d64e9b5f216105c1ce6f378be5ba2438982] <==
	* I0813 00:04:52.963820       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0813 00:04:53.987647       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0813 00:04:54.063314       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0813 00:04:59.716831       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0813 00:05:06.372626       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0813 00:05:06.576938       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0813 00:05:19.544423       1 client.go:360] parsed scheme: "passthrough"
	I0813 00:05:19.544503       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 00:05:19.544527       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 00:05:42.358309       1 trace.go:205] Trace[301258205]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (13-Aug-2021 00:05:41.857) (total time: 500ms):
	Trace[301258205]: [500.998587ms] [500.998587ms] END
	I0813 00:05:42.358800       1 trace.go:205] Trace[445959692]: "List" url:/api/v1/nodes,user-agent:kindnetd/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.39.22,accept:application/json, */*,protocol:HTTP/2.0 (13-Aug-2021 00:05:41.857) (total time: 501ms):
	Trace[445959692]: ---"Listing from storage done" 501ms (00:05:00.358)
	Trace[445959692]: [501.596542ms] [501.596542ms] END
	I0813 00:05:43.468671       1 trace.go:205] Trace[1190083982]: "Get" url:/api/v1/namespaces/default/endpoints/kubernetes,user-agent:kube-apiserver/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (13-Aug-2021 00:05:42.438) (total time: 1029ms):
	Trace[1190083982]: ---"About to write a response" 1028ms (00:05:00.466)
	Trace[1190083982]: [1.029915279s] [1.029915279s] END
	I0813 00:06:01.288323       1 client.go:360] parsed scheme: "passthrough"
	I0813 00:06:01.288386       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 00:06:01.288401       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	E0813 00:06:19.762255       1 upgradeaware.go:401] Error proxying data from backend to client: write tcp 192.168.39.22:8443->192.168.39.1:48532: write: connection reset by peer
	E0813 00:06:32.849367       1 upgradeaware.go:387] Error proxying data from client to backend: read tcp 192.168.39.22:8443->192.168.39.1:48574: read: connection reset by peer
	I0813 00:06:34.746344       1 client.go:360] parsed scheme: "passthrough"
	I0813 00:06:34.746479       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 00:06:34.746521       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
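
The Trace[...] blocks above are the apiserver's request tracing: step timings are buffered per request and emitted only when the request exceeds a latency threshold, which both of these did while etcd was slow (compare the 00:05:42 WAL warning in the etcd log). A minimal sketch of that mechanism with k8s.io/utils/trace, the package these trace.go references appear to come from; the trace name, field, and sleep are illustrative:

    package main

    import (
        "time"

        utiltrace "k8s.io/utils/trace"
    )

    func main() {
        // Steps are recorded silently; the trace is only logged if total time
        // exceeds the threshold handed to LogIfLong, which is why only the
        // 500ms-plus requests show up in the apiserver log above.
        tr := utiltrace.New("List", utiltrace.Field{Key: "url", Value: "/api/v1/nodes"})
        defer tr.LogIfLong(500 * time.Millisecond)

        time.Sleep(600 * time.Millisecond) // stand-in for the slow etcd list
        tr.Step("Listing from storage done")
    }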
	
	* 
	* ==> kube-controller-manager [cd7113723a04b479e05376effd00d05266a436fc3ad70a9db4d776e661744766] <==
	* I0813 00:05:06.026258       1 shared_informer.go:247] Caches are synced for crt configmap 
	I0813 00:05:06.102306       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0813 00:05:06.390232       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-tvtvh"
	I0813 00:05:06.399387       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-rzxjz"
	E0813 00:05:06.446097       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"e61f546c-a1b1-4412-af08-7c2ebe78d772", ResourceVersion:"288", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764409894, loc:(*time.Location)(0x72ff440)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc00035abd0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00035ac00)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.
LabelSelector)(0xc0013a2460), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Gl
usterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0013b83c0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00035a
c30), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeS
ource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00035ac60), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil),
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.21.3", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0013a24a0)}}, Resources:v1.R
esourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001601da0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPo
licy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000fe19d8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000379b20), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), Runtime
ClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000f97900)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000fe1a28)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	E0813 00:05:06.453032       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"c1f4f3ef-f446-4a5a-9116-cc17ab8a2d14", ResourceVersion:"305", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764409894, loc:(*time.Location)(0x72ff440)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"k
indnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"kindest/kindnetd:v20210326-1e038dc5\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists
\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc00035ac90), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00035acc0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0013a2520), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"ki
ndnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00035acf0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FC
VolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00035ad20), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolume
Source)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00035ad50), EmptyDir:(*v1.Emp
tyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVo
lume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"kindest/kindnetd:v20210326-1e038dc5", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0013a2540)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0013a2580)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDe
cAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Li
fecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001601e00), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000fe1c48), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000379c00), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.Hos
tAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000f97950)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000fe1c90)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0813 00:05:06.499954       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 00:05:06.543290       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 00:05:06.543386       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0813 00:05:06.580142       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-558bd4d5db to 2"
	I0813 00:05:06.716895       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0813 00:05:06.827982       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-sstrb"
	I0813 00:05:06.859250       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-rgwt6"
	I0813 00:05:06.928919       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-rgwt6"
	W0813 00:05:50.866621       1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-20210813000359-820289-m02" does not exist
	W0813 00:05:50.873151       1 node_lifecycle_controller.go:1013] Missing timestamp for Node multinode-20210813000359-820289-m02. Assuming now as a timestamp.
	I0813 00:05:50.873727       1 event.go:291] "Event occurred" object="multinode-20210813000359-820289-m02" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-20210813000359-820289-m02 event: Registered Node multinode-20210813000359-820289-m02 in Controller"
	I0813 00:05:50.904798       1 range_allocator.go:373] Set node multinode-20210813000359-820289-m02 PodCIDR to [10.244.1.0/24]
	I0813 00:05:50.935579       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-8h4t8"
	I0813 00:05:50.943336       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-dpckf"
	I0813 00:05:59.274555       1 event.go:291] "Event occurred" object="default/busybox" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-84b6686758 to 2"
	I0813 00:05:59.291911       1 event.go:291] "Event occurred" object="default/busybox-84b6686758" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-84b6686758-p6fb8"
	I0813 00:05:59.320696       1 event.go:291] "Event occurred" object="default/busybox-84b6686758" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-84b6686758-gpb9d"
	
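The two "Operation cannot be fulfilled ... the object has been modified" errors above (for the kube-proxy and kindnet DaemonSets) are the API server's optimistic-concurrency check: the controller wrote status using a stale resourceVersion and was told to re-read and retry, which it does on its own. During bootstrap several writers race on the same objects, so a handful of these is normal and the DaemonSets still come up. A minimal client-go sketch of the same read-modify-retry pattern (the package name, clientset wiring, and annotation change are illustrative, not from this test):

	package notes

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/util/retry"
	)

	// touchDaemonSet updates a DaemonSet and, on a 409 Conflict, re-reads the
	// latest copy and tries again -- the same recovery the controller-manager
	// performs after the errors logged above.
	func touchDaemonSet(clientset kubernetes.Interface) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			ds, err := clientset.AppsV1().DaemonSets("kube-system").Get(context.TODO(), "kindnet", metav1.GetOptions{})
			if err != nil {
				return err
			}
			if ds.Annotations == nil {
				ds.Annotations = map[string]string{}
			}
			ds.Annotations["example.com/touched"] = "true" // hypothetical change
			_, err = clientset.AppsV1().DaemonSets("kube-system").Update(context.TODO(), ds, metav1.UpdateOptions{})
			return err // a Conflict error here makes RetryOnConflict loop
		})
	}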
	* 
	* ==> kube-proxy [197270e3714f8da106e4432573059d334400188532dad024e8659b6ca68d3950] <==
	* I0813 00:05:08.157823       1 node.go:172] Successfully retrieved node IP: 192.168.39.22
	I0813 00:05:08.157940       1 server_others.go:140] Detected node IP 192.168.39.22
	W0813 00:05:08.158041       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	W0813 00:05:08.238694       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0813 00:05:08.238728       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0813 00:05:08.238744       1 server_others.go:212] Using iptables Proxier.
	I0813 00:05:08.239501       1 server.go:643] Version: v1.21.3
	I0813 00:05:08.241057       1 config.go:315] Starting service config controller
	I0813 00:05:08.241070       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 00:05:08.241104       1 config.go:224] Starting endpoint slice config controller
	I0813 00:05:08.241108       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0813 00:05:08.259799       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 00:05:08.265334       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 00:05:08.341440       1 shared_informer.go:247] Caches are synced for service config 
	I0813 00:05:08.341450       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
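Two details in the kube-proxy excerpt: with no mode set in its configuration it falls back to the iptables proxier ("Unknown proxy mode, assuming iptables proxy") and runs single-stack IPv4 because the VM has no IPv6 iptables support; the discovery.k8s.io/v1beta1 EndpointSlice warnings are expected deprecation notices on a v1.21 cluster. The pod spec dumped earlier mounts the kube-proxy ConfigMap at /var/lib/kube-proxy and passes --config=/var/lib/kube-proxy/config.conf, so the effective configuration can be read back from that ConfigMap; a sketch assuming a kubeadm-style cluster (ConfigMap key "config.conf") and an existing clientset:

	package notes

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// printProxyConfig dumps the configuration file behind
	// /var/lib/kube-proxy/config.conf; an empty `mode:` field is what
	// produces the fallback warning above.
	func printProxyConfig(clientset kubernetes.Interface) error {
		cm, err := clientset.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "kube-proxy", metav1.GetOptions{})
		if err != nil {
			return err
		}
		fmt.Println(cm.Data["config.conf"])
		return nil
	}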
	* 
	* ==> kube-scheduler [147ad965e8ea44e04be64c13be2c8fc9afa937d704566b412ecfcb2b1e079d95] <==
	* I0813 00:04:50.453581       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 00:04:50.453621       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0813 00:04:50.455372       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 00:04:50.458065       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 00:04:50.458587       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 00:04:50.458677       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 00:04:50.460389       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 00:04:50.460476       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 00:04:50.460546       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 00:04:50.460694       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 00:04:50.460745       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 00:04:50.460796       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 00:04:50.460838       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 00:04:50.460886       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 00:04:50.460933       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 00:04:50.460972       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 00:04:51.346695       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 00:04:51.413250       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 00:04:51.524533       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 00:04:51.593056       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 00:04:51.618330       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 00:04:51.619558       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 00:04:51.813783       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 00:04:51.816826       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0813 00:04:54.254249       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
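The burst of `is forbidden: User "system:kube-scheduler" cannot list resource ...` errors is a routine bootstrap race: the scheduler starts its informers before its RBAC bindings exist, the reflectors back off and retry, and the excerpt ends with the caches synced at 00:04:54. Were these to persist, the grant could be probed with `kubectl auth can-i list pods --as=system:kube-scheduler`, or programmatically with a SubjectAccessReview; a sketch assuming an existing clientset:

	package notes

	import (
		"context"

		authorizationv1 "k8s.io/api/authorization/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// canSchedulerListPods asks the API server whether system:kube-scheduler
	// may list pods cluster-wide -- the same check `kubectl auth can-i` runs.
	func canSchedulerListPods(clientset kubernetes.Interface) (bool, error) {
		sar := &authorizationv1.SubjectAccessReview{
			Spec: authorizationv1.SubjectAccessReviewSpec{
				User: "system:kube-scheduler",
				ResourceAttributes: &authorizationv1.ResourceAttributes{
					Verb:     "list",
					Resource: "pods",
				},
			},
		}
		resp, err := clientset.AuthorizationV1().SubjectAccessReviews().Create(context.TODO(), sar, metav1.CreateOptions{})
		if err != nil {
			return false, err
		}
		return resp.Status.Allowed, nil
	}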
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 00:04:10 UTC, end at Fri 2021-08-13 00:06:44 UTC. --
	Aug 13 00:05:06 multinode-20210813000359-820289 kubelet[2829]: I0813 00:05:06.439716    2829 topology_manager.go:187] "Topology Admit Handler"
	Aug 13 00:05:06 multinode-20210813000359-820289 kubelet[2829]: I0813 00:05:06.442619    2829 topology_manager.go:187] "Topology Admit Handler"
	Aug 13 00:05:06 multinode-20210813000359-820289 kubelet[2829]: W0813 00:05:06.521418    2829 watcher.go:95] Error while processing event ("/sys/fs/cgroup/devices/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b54ef6d_43f6_4dc1_b3be_c5fb1b57a108.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b54ef6d_43f6_4dc1_b3be_c5fb1b57a108.slice: no such file or directory
	Aug 13 00:05:06 multinode-20210813000359-820289 kubelet[2829]: I0813 00:05:06.551308    2829 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7b54ef6d-43f6-4dc1-b3be-c5fb1b57a108-kube-proxy\") pod \"kube-proxy-tvtvh\" (UID: \"7b54ef6d-43f6-4dc1-b3be-c5fb1b57a108\") "
	Aug 13 00:05:06 multinode-20210813000359-820289 kubelet[2829]: I0813 00:05:06.552323    2829 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7b54ef6d-43f6-4dc1-b3be-c5fb1b57a108-lib-modules\") pod \"kube-proxy-tvtvh\" (UID: \"7b54ef6d-43f6-4dc1-b3be-c5fb1b57a108\") "
	Aug 13 00:05:06 multinode-20210813000359-820289 kubelet[2829]: I0813 00:05:06.552816    2829 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/650bf88e-f784-45f9-8943-257e984acedb-xtables-lock\") pod \"kindnet-rzxjz\" (UID: \"650bf88e-f784-45f9-8943-257e984acedb\") "
	Aug 13 00:05:06 multinode-20210813000359-820289 kubelet[2829]: I0813 00:05:06.553052    2829 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvrmd\" (UniqueName: \"kubernetes.io/projected/7b54ef6d-43f6-4dc1-b3be-c5fb1b57a108-kube-api-access-fvrmd\") pod \"kube-proxy-tvtvh\" (UID: \"7b54ef6d-43f6-4dc1-b3be-c5fb1b57a108\") "
	Aug 13 00:05:06 multinode-20210813000359-820289 kubelet[2829]: I0813 00:05:06.553390    2829 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7b54ef6d-43f6-4dc1-b3be-c5fb1b57a108-xtables-lock\") pod \"kube-proxy-tvtvh\" (UID: \"7b54ef6d-43f6-4dc1-b3be-c5fb1b57a108\") "
	Aug 13 00:05:06 multinode-20210813000359-820289 kubelet[2829]: I0813 00:05:06.553607    2829 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/650bf88e-f784-45f9-8943-257e984acedb-cni-cfg\") pod \"kindnet-rzxjz\" (UID: \"650bf88e-f784-45f9-8943-257e984acedb\") "
	Aug 13 00:05:06 multinode-20210813000359-820289 kubelet[2829]: I0813 00:05:06.553826    2829 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/650bf88e-f784-45f9-8943-257e984acedb-lib-modules\") pod \"kindnet-rzxjz\" (UID: \"650bf88e-f784-45f9-8943-257e984acedb\") "
	Aug 13 00:05:06 multinode-20210813000359-820289 kubelet[2829]: I0813 00:05:06.554041    2829 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4677\" (UniqueName: \"kubernetes.io/projected/650bf88e-f784-45f9-8943-257e984acedb-kube-api-access-v4677\") pod \"kindnet-rzxjz\" (UID: \"650bf88e-f784-45f9-8943-257e984acedb\") "
	Aug 13 00:05:06 multinode-20210813000359-820289 kubelet[2829]: E0813 00:05:06.669835    2829 projected.go:293] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Aug 13 00:05:06 multinode-20210813000359-820289 kubelet[2829]: E0813 00:05:06.670054    2829 projected.go:199] Error preparing data for projected volume kube-api-access-v4677 for pod kube-system/kindnet-rzxjz: configmap "kube-root-ca.crt" not found
	Aug 13 00:05:06 multinode-20210813000359-820289 kubelet[2829]: E0813 00:05:06.670274    2829 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/projected/650bf88e-f784-45f9-8943-257e984acedb-kube-api-access-v4677 podName:650bf88e-f784-45f9-8943-257e984acedb nodeName:}" failed. No retries permitted until 2021-08-13 00:05:07.170247709 +0000 UTC m=+13.251773692 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"kube-api-access-v4677\" (UniqueName: \"kubernetes.io/projected/650bf88e-f784-45f9-8943-257e984acedb-kube-api-access-v4677\") pod \"kindnet-rzxjz\" (UID: \"650bf88e-f784-45f9-8943-257e984acedb\") : configmap \"kube-root-ca.crt\" not found"
	Aug 13 00:05:06 multinode-20210813000359-820289 kubelet[2829]: I0813 00:05:06.854604    2829 topology_manager.go:187] "Topology Admit Handler"
	Aug 13 00:05:06 multinode-20210813000359-820289 kubelet[2829]: I0813 00:05:06.863466    2829 topology_manager.go:187] "Topology Admit Handler"
	Aug 13 00:05:06 multinode-20210813000359-820289 kubelet[2829]: I0813 00:05:06.957159    2829 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16f6c77d-26a2-47e7-9c19-74736961cc13-config-volume\") pod \"coredns-558bd4d5db-sstrb\" (UID: \"16f6c77d-26a2-47e7-9c19-74736961cc13\") "
	Aug 13 00:05:06 multinode-20210813000359-820289 kubelet[2829]: I0813 00:05:06.957533    2829 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfx7l\" (UniqueName: \"kubernetes.io/projected/16f6c77d-26a2-47e7-9c19-74736961cc13-kube-api-access-vfx7l\") pod \"coredns-558bd4d5db-sstrb\" (UID: \"16f6c77d-26a2-47e7-9c19-74736961cc13\") "
	Aug 13 00:05:09 multinode-20210813000359-820289 kubelet[2829]: I0813 00:05:09.130357    2829 topology_manager.go:187] "Topology Admit Handler"
	Aug 13 00:05:09 multinode-20210813000359-820289 kubelet[2829]: I0813 00:05:09.208141    2829 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqw9b\" (UniqueName: \"kubernetes.io/projected/9999a063-d32c-4253-8af3-7c28fdc3c692-kube-api-access-wqw9b\") pod \"storage-provisioner\" (UID: \"9999a063-d32c-4253-8af3-7c28fdc3c692\") "
	Aug 13 00:05:09 multinode-20210813000359-820289 kubelet[2829]: I0813 00:05:09.209065    2829 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9999a063-d32c-4253-8af3-7c28fdc3c692-tmp\") pod \"storage-provisioner\" (UID: \"9999a063-d32c-4253-8af3-7c28fdc3c692\") "
	Aug 13 00:05:10 multinode-20210813000359-820289 kubelet[2829]: E0813 00:05:10.052730    2829 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/650bf88e-f784-45f9-8943-257e984acedb/etc-hosts with error exit status 1" pod="kube-system/kindnet-rzxjz"
	Aug 13 00:05:59 multinode-20210813000359-820289 kubelet[2829]: I0813 00:05:59.334816    2829 topology_manager.go:187] "Topology Admit Handler"
	Aug 13 00:05:59 multinode-20210813000359-820289 kubelet[2829]: I0813 00:05:59.421506    2829 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd2sm\" (UniqueName: \"kubernetes.io/projected/b2d74316-9ad3-435c-85a3-19e862cd06d2-kube-api-access-bd2sm\") pod \"busybox-84b6686758-gpb9d\" (UID: \"b2d74316-9ad3-435c-85a3-19e862cd06d2\") "
	Aug 13 00:06:00 multinode-20210813000359-820289 kubelet[2829]: E0813 00:06:00.691463    2829 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/b2d74316-9ad3-435c-85a3-19e862cd06d2/etc-hosts with error exit status 1" pod="default/busybox-84b6686758-gpb9d"
	
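The kubelet errors here are also bootstrap noise: every kube-api-access-* projected volume bundles the cluster CA from the kube-root-ca.crt ConfigMap, which kube-controller-manager publishes into each namespace only shortly after it starts, so the first mount for kindnet-rzxjz fails and is retried after the logged 500ms backoff (the coredns and storage-provisioner mounts that follow go through). A sketch of waiting for that ConfigMap the same way, assuming an existing clientset:

	package notes

	import (
		"context"
		"time"

		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitForRootCA polls until kube-root-ca.crt exists in the namespace,
	// mirroring the kubelet's retry of the projected volume above.
	func waitForRootCA(clientset kubernetes.Interface, namespace string) error {
		deadline := time.Now().Add(30 * time.Second)
		for {
			_, err := clientset.CoreV1().ConfigMaps(namespace).Get(context.TODO(), "kube-root-ca.crt", metav1.GetOptions{})
			if err == nil {
				return nil
			}
			if !apierrors.IsNotFound(err) || time.Now().After(deadline) {
				return err
			}
			time.Sleep(500 * time.Millisecond) // the backoff the kubelet logged
		}
	}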
	* 
	* ==> storage-provisioner [30246750ae65a222bbebc82aec9e04ccd550ff645f3c13528cdd844fc3506ed9] <==
	* I0813 00:05:10.721489       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 00:05:10.747816       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 00:05:10.748608       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 00:05:10.768795       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 00:05:10.771247       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-20210813000359-820289_2ba37811-4ee3-4290-accc-c4e910b69983!
	I0813 00:05:10.780915       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fbaba920-a52e-4a33-827b-b81a17ff6434", APIVersion:"v1", ResourceVersion:"488", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-20210813000359-820289_2ba37811-4ee3-4290-accc-c4e910b69983 became leader
	I0813 00:05:10.872930       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-20210813000359-820289_2ba37811-4ee3-4290-accc-c4e910b69983!
	
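The storage-provisioner excerpt is the standard client-go leader-election sequence: acquire the kube-system/k8s.io-minikube-hostpath lock (backed here by an Endpoints object, per the LeaderElection event), and start the provisioner controller only once the lease is held. A sketch of the same pattern using the newer coordination/v1 Lease lock (identity and timings illustrative):

	package notes

	import (
		"context"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	// runWithLeaderElection starts work only after this instance holds the
	// lease, as storage-provisioner does above (it uses an Endpoints-backed
	// lock; a Lease is the usual choice in current code).
	func runWithLeaderElection(ctx context.Context, clientset kubernetes.Interface, id string) {
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
			Client:     clientset.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}
		leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
			Lock:            lock,
			ReleaseOnCancel: true,
			LeaseDuration:   15 * time.Second,
			RenewDeadline:   10 * time.Second,
			RetryPeriod:     2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { /* start the controller */ },
				OnStoppedLeading: func() { /* lease lost; stop work */ },
			},
		})
	}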

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-20210813000359-820289 -n multinode-20210813000359-820289
helpers_test.go:262: (dbg) Run:  kubectl --context multinode-20210813000359-820289 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: 
helpers_test.go:273: ======> post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context multinode-20210813000359-820289 describe pod 
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context multinode-20210813000359-820289 describe pod : exit status 1 (47.266624ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:278: kubectl --context multinode-20210813000359-820289 describe pod : exit status 1
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (13.18s)
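The stderr above is the post-mortem helper tripping over its own success case: the field-selector query at helpers_test.go:262 found no non-running pods, so helpers_test.go:276 ran `kubectl describe pod` with no names and kubectl rejected the empty argument list. It is noise layered on top of the actual ping failure, not its cause. A guard of roughly this shape would avoid it (illustrative, not the real helpers_test.go plumbing):

	package notes

	import (
		"os/exec"
		"strings"
		"testing"
	)

	// describeNonRunning shells out to `kubectl describe pod` only when the
	// earlier field-selector query actually produced pod names.
	func describeNonRunning(t *testing.T, kubectlContext, podNames string) {
		names := strings.Fields(podNames)
		if len(names) == 0 {
			t.Log("no non-running pods to describe")
			return
		}
		args := append([]string{"--context", kubectlContext, "describe", "pod"}, names...)
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err != nil {
			t.Logf("kubectl describe pod failed: %v", err)
		}
		t.Logf("%s", out)
	}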

TestPreload (166.09s)

=== RUN   TestPreload
preload_test.go:48: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20210813001622-820289 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.17.0
E0813 00:16:41.535247  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812235933-820289/client.crt: no such file or directory
preload_test.go:48: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20210813001622-820289 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.17.0: (1m53.862993794s)
preload_test.go:61: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20210813001622-820289 -- sudo crictl pull busybox
preload_test.go:61: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-20210813001622-820289 -- sudo crictl pull busybox: (2.285071214s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20210813001622-820289 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.17.3
E0813 00:18:50.748399  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/client.crt: no such file or directory
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20210813001622-820289 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.17.3: (46.292050488s)
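The two interleaved cert_rotation.go errors are unrelated background noise: client-go's certificate-rotation watcher appears to still be tracking client certificates for the functional-20210812235933-820289 and addons-20210812235029-820289 profiles, whose files were removed when those earlier tests cleaned up, so each refresh attempt fails with "no such file or directory".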
preload_test.go:80: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20210813001622-820289 -- sudo crictl image ls
preload_test.go:85: Expected to find busybox in output of `docker images`, instead got 
-- stdout --
	IMAGE               TAG                 IMAGE ID            SIZE

-- /stdout --
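A plausible reading of this failure: the first start used --preload=false on v1.17.0 and busybox was pulled by hand; the restart switched to v1.17.3, for which a preloaded image tarball exists, and re-provisioning the CRI-O image store from that tarball would discard anything pulled manually. Notably the listing shows no images at all, and the assertion message says `docker images` although the command run at preload_test.go:80 is `crictl image ls`, so a trailing `ls` argument being taken by crictl as an image-name filter may also be in play; the log alone cannot distinguish the two. The assertion itself boils down to this (simplified sketch, not the verbatim test):

	package notes

	import (
		"os/exec"
		"strings"
		"testing"
	)

	// checkBusyboxSurvived restates the check at preload_test.go:80-85: list
	// images over SSH and require busybox to still be present.
	func checkBusyboxSurvived(t *testing.T, profile string) {
		out, err := exec.Command("out/minikube-linux-amd64", "ssh", "-p", profile,
			"--", "sudo", "crictl", "image", "ls").CombinedOutput()
		if err != nil {
			t.Fatalf("crictl image ls failed: %v", err)
		}
		if !strings.Contains(string(out), "busybox") {
			t.Fatalf("expected busybox in image list, got:\n%s", out)
		}
	}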
panic.go:613: *** TestPreload FAILED at 2021-08-13 00:19:05.256212803 +0000 UTC m=+1741.107736950
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-20210813001622-820289 -n test-preload-20210813001622-820289
helpers_test.go:245: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-20210813001622-820289 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p test-preload-20210813001622-820289 logs -n 25: (1.424557613s)
helpers_test.go:253: TestPreload logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                             Args                             |               Profile               |  User   | Version |          Start Time           |           End Time            |
	|---------|--------------------------------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| kubectl | -p                                                           | multinode-20210813000359-820289     | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:06:32 UTC | Fri, 13 Aug 2021 00:06:43 UTC |
	|         | multinode-20210813000359-820289                              |                                     |         |         |                               |                               |
	|         | -- exec                                                      |                                     |         |         |                               |                               |
	|         | busybox-84b6686758-p6fb8                                     |                                     |         |         |                               |                               |
	|         | -- sh -c nslookup                                            |                                     |         |         |                               |                               |
	|         | host.minikube.internal | awk                                 |                                     |         |         |                               |                               |
	|         | 'NR==5' | cut -d' ' -f3                                      |                                     |         |         |                               |                               |
	| -p      | multinode-20210813000359-820289                              | multinode-20210813000359-820289     | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:06:43 UTC | Fri, 13 Aug 2021 00:06:44 UTC |
	|         | logs -n 25                                                   |                                     |         |         |                               |                               |
	| node    | add -p                                                       | multinode-20210813000359-820289     | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:06:45 UTC | Fri, 13 Aug 2021 00:07:37 UTC |
	|         | multinode-20210813000359-820289                              |                                     |         |         |                               |                               |
	|         | -v 3 --alsologtostderr                                       |                                     |         |         |                               |                               |
	| profile | list --output json                                           | minikube                            | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:07:38 UTC | Fri, 13 Aug 2021 00:07:38 UTC |
	| -p      | multinode-20210813000359-820289                              | multinode-20210813000359-820289     | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:07:39 UTC | Fri, 13 Aug 2021 00:07:39 UTC |
	|         | cp testdata/cp-test.txt                                      |                                     |         |         |                               |                               |
	|         | /home/docker/cp-test.txt                                     |                                     |         |         |                               |                               |
	| -p      | multinode-20210813000359-820289                              | multinode-20210813000359-820289     | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:07:39 UTC | Fri, 13 Aug 2021 00:07:39 UTC |
	|         | ssh sudo cat                                                 |                                     |         |         |                               |                               |
	|         | /home/docker/cp-test.txt                                     |                                     |         |         |                               |                               |
	| -p      | multinode-20210813000359-820289 cp testdata/cp-test.txt      | multinode-20210813000359-820289     | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:07:39 UTC | Fri, 13 Aug 2021 00:07:39 UTC |
	|         | multinode-20210813000359-820289-m02:/home/docker/cp-test.txt |                                     |         |         |                               |                               |
	| -p      | multinode-20210813000359-820289                              | multinode-20210813000359-820289     | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:07:39 UTC | Fri, 13 Aug 2021 00:07:39 UTC |
	|         | ssh -n                                                       |                                     |         |         |                               |                               |
	|         | multinode-20210813000359-820289-m02                          |                                     |         |         |                               |                               |
	|         | sudo cat /home/docker/cp-test.txt                            |                                     |         |         |                               |                               |
	| -p      | multinode-20210813000359-820289 cp testdata/cp-test.txt      | multinode-20210813000359-820289     | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:07:39 UTC | Fri, 13 Aug 2021 00:07:40 UTC |
	|         | multinode-20210813000359-820289-m03:/home/docker/cp-test.txt |                                     |         |         |                               |                               |
	| -p      | multinode-20210813000359-820289                              | multinode-20210813000359-820289     | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:07:40 UTC | Fri, 13 Aug 2021 00:07:40 UTC |
	|         | ssh -n                                                       |                                     |         |         |                               |                               |
	|         | multinode-20210813000359-820289-m03                          |                                     |         |         |                               |                               |
	|         | sudo cat /home/docker/cp-test.txt                            |                                     |         |         |                               |                               |
	| -p      | multinode-20210813000359-820289                              | multinode-20210813000359-820289     | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:07:40 UTC | Fri, 13 Aug 2021 00:07:42 UTC |
	|         | node stop m03                                                |                                     |         |         |                               |                               |
	| -p      | multinode-20210813000359-820289                              | multinode-20210813000359-820289     | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:07:43 UTC | Fri, 13 Aug 2021 00:08:31 UTC |
	|         | node start m03                                               |                                     |         |         |                               |                               |
	|         | --alsologtostderr                                            |                                     |         |         |                               |                               |
	| stop    | -p                                                           | multinode-20210813000359-820289     | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:08:31 UTC | Fri, 13 Aug 2021 00:08:39 UTC |
	|         | multinode-20210813000359-820289                              |                                     |         |         |                               |                               |
	| start   | -p                                                           | multinode-20210813000359-820289     | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:08:39 UTC | Fri, 13 Aug 2021 00:11:28 UTC |
	|         | multinode-20210813000359-820289                              |                                     |         |         |                               |                               |
	|         | --wait=true -v=8                                             |                                     |         |         |                               |                               |
	|         | --alsologtostderr                                            |                                     |         |         |                               |                               |
	| -p      | multinode-20210813000359-820289                              | multinode-20210813000359-820289     | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:11:28 UTC | Fri, 13 Aug 2021 00:11:29 UTC |
	|         | node delete m03                                              |                                     |         |         |                               |                               |
	| -p      | multinode-20210813000359-820289                              | multinode-20210813000359-820289     | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:11:30 UTC | Fri, 13 Aug 2021 00:11:35 UTC |
	|         | stop                                                         |                                     |         |         |                               |                               |
	| start   | -p                                                           | multinode-20210813000359-820289     | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:11:35 UTC | Fri, 13 Aug 2021 00:13:30 UTC |
	|         | multinode-20210813000359-820289                              |                                     |         |         |                               |                               |
	|         | --wait=true -v=8                                             |                                     |         |         |                               |                               |
	|         | --alsologtostderr --driver=kvm2                              |                                     |         |         |                               |                               |
	|         |  --container-runtime=crio                                    |                                     |         |         |                               |                               |
	| start   | -p                                                           | multinode-20210813000359-820289-m03 | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:13:31 UTC | Fri, 13 Aug 2021 00:14:31 UTC |
	|         | multinode-20210813000359-820289-m03                          |                                     |         |         |                               |                               |
	|         | --driver=kvm2                                                |                                     |         |         |                               |                               |
	|         | --container-runtime=crio                                     |                                     |         |         |                               |                               |
	| delete  | -p                                                           | multinode-20210813000359-820289-m03 | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:14:32 UTC | Fri, 13 Aug 2021 00:14:32 UTC |
	|         | multinode-20210813000359-820289-m03                          |                                     |         |         |                               |                               |
	| -p      | multinode-20210813000359-820289                              | multinode-20210813000359-820289     | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:14:33 UTC | Fri, 13 Aug 2021 00:14:34 UTC |
	|         | logs -n 25                                                   |                                     |         |         |                               |                               |
	| delete  | -p                                                           | multinode-20210813000359-820289     | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:14:34 UTC | Fri, 13 Aug 2021 00:14:36 UTC |
	|         | multinode-20210813000359-820289                              |                                     |         |         |                               |                               |
	| start   | -p                                                           | test-preload-20210813001622-820289  | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:16:22 UTC | Fri, 13 Aug 2021 00:18:16 UTC |
	|         | test-preload-20210813001622-820289                           |                                     |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                              |                                     |         |         |                               |                               |
	|         | --wait=true --preload=false                                  |                                     |         |         |                               |                               |
	|         | --driver=kvm2                                                |                                     |         |         |                               |                               |
	|         | --container-runtime=crio                                     |                                     |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.0                                 |                                     |         |         |                               |                               |
	| ssh     | -p                                                           | test-preload-20210813001622-820289  | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:18:16 UTC | Fri, 13 Aug 2021 00:18:18 UTC |
	|         | test-preload-20210813001622-820289                           |                                     |         |         |                               |                               |
	|         | -- sudo crictl pull busybox                                  |                                     |         |         |                               |                               |
	| start   | -p                                                           | test-preload-20210813001622-820289  | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:18:18 UTC | Fri, 13 Aug 2021 00:19:04 UTC |
	|         | test-preload-20210813001622-820289                           |                                     |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                              |                                     |         |         |                               |                               |
	|         | -v=1 --wait=true --driver=kvm2                               |                                     |         |         |                               |                               |
	|         |  --container-runtime=crio                                    |                                     |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.3                                 |                                     |         |         |                               |                               |
	| ssh     | -p                                                           | test-preload-20210813001622-820289  | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:19:05 UTC | Fri, 13 Aug 2021 00:19:05 UTC |
	|         | test-preload-20210813001622-820289                           |                                     |         |         |                               |                               |
	|         | -- sudo crictl image ls                                      |                                     |         |         |                               |                               |
	|---------|--------------------------------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
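The Audit table pins the timeline down: the v1.17.0 start with --preload=false ran 00:16:22-00:18:16, busybox was pulled at 00:18:16-00:18:18, the v1.17.3 restart ran 00:18:18-00:19:04, and the failing image listing followed at 00:19:05.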
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 00:18:18
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 00:18:18.779880  854283 out.go:298] Setting OutFile to fd 1 ...
	I0813 00:18:18.779984  854283 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 00:18:18.779995  854283 out.go:311] Setting ErrFile to fd 2...
	I0813 00:18:18.780000  854283 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 00:18:18.780110  854283 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	I0813 00:18:18.780350  854283 out.go:305] Setting JSON to false
	I0813 00:18:18.815970  854283 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":14462,"bootTime":1628799437,"procs":160,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 00:18:18.816034  854283 start.go:121] virtualization: kvm guest
	I0813 00:18:18.818503  854283 out.go:177] * [test-preload-20210813001622-820289] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 00:18:18.819960  854283 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0813 00:18:18.818652  854283 notify.go:169] Checking for updates...
	I0813 00:18:18.821430  854283 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 00:18:18.822814  854283 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	I0813 00:18:18.824199  854283 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 00:18:18.824945  854283 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 00:18:18.824999  854283 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 00:18:18.836000  854283 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40597
	I0813 00:18:18.836399  854283 main.go:130] libmachine: () Calling .GetVersion
	I0813 00:18:18.836924  854283 main.go:130] libmachine: Using API Version  1
	I0813 00:18:18.836945  854283 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 00:18:18.837335  854283 main.go:130] libmachine: () Calling .GetMachineName
	I0813 00:18:18.837531  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .DriverName
	I0813 00:18:18.839166  854283 out.go:177] * Kubernetes 1.21.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.21.3
	I0813 00:18:18.839208  854283 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 00:18:18.839543  854283 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 00:18:18.839581  854283 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 00:18:18.849911  854283 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34341
	I0813 00:18:18.850333  854283 main.go:130] libmachine: () Calling .GetVersion
	I0813 00:18:18.850919  854283 main.go:130] libmachine: Using API Version  1
	I0813 00:18:18.850943  854283 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 00:18:18.851277  854283 main.go:130] libmachine: () Calling .GetMachineName
	I0813 00:18:18.851450  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .DriverName
	I0813 00:18:18.879917  854283 out.go:177] * Using the kvm2 driver based on existing profile
	I0813 00:18:18.879941  854283 start.go:278] selected driver: kvm2
	I0813 00:18:18.879946  854283 start.go:751] validating driver "kvm2" against &{Name:test-preload-20210813001622-820289 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12122/minikube-v1.22.0-1628238775-12122.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName:test-preload-20210813001622-820289 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.8 Port:8443 KubernetesVersion:v1.17.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 00:18:18.880062  854283 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0813 00:18:18.881093  854283 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:18:18.881214  854283 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0813 00:18:18.891429  854283 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0813 00:18:18.891753  854283 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 00:18:18.891787  854283 cni.go:93] Creating CNI manager for ""
	I0813 00:18:18.891795  854283 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 00:18:18.891824  854283 start_flags.go:277] config:
	{Name:test-preload-20210813001622-820289 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12122/minikube-v1.22.0-1628238775-12122.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.3 ClusterName:test-preload-20210813001622-820289 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.8 Port:8443 KubernetesVersion:v1.17.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 00:18:18.891932  854283 iso.go:123] acquiring lock: {Name:mk52748db467e5aa4b344902ee09c9ea40467a67 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:18:18.893741  854283 out.go:177] * Starting control plane node test-preload-20210813001622-820289 in cluster test-preload-20210813001622-820289
	I0813 00:18:18.893761  854283 preload.go:131] Checking if preload exists for k8s version v1.17.3 and runtime crio
	W0813 00:18:18.917951  854283 preload.go:114] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.17.3-cri-o-overlay-amd64.tar.lz4 status code: 404
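
	This 404 is the pivotal event of the run: no preloaded image tarball is published for Kubernetes v1.17.3 with cri-o, so minikube falls back to caching and loading every image individually, which is what the cache.go lines below are doing. A minimal sketch of such a probe, assuming a plain HTTP HEAD request; the function name is illustrative, not minikube's actual API:

	    package main

	    import (
	    	"fmt"
	    	"net/http"
	    )

	    // preloadExists probes the tarball URL; any non-200 status (here a 404)
	    // means no preload is available and images must be cached one by one.
	    func preloadExists(url string) (bool, error) {
	    	resp, err := http.Head(url)
	    	if err != nil {
	    		return false, err
	    	}
	    	defer resp.Body.Close()
	    	return resp.StatusCode == http.StatusOK, nil
	    }

	    func main() {
	    	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.17.3-cri-o-overlay-amd64.tar.lz4"
	    	if ok, err := preloadExists(url); err != nil {
	    		fmt.Println("probe failed:", err)
	    	} else if !ok {
	    		fmt.Println("no preload; falling back to per-image caching")
	    	}
	    }
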
	I0813 00:18:18.918157  854283 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/test-preload-20210813001622-820289/config.json ...
	I0813 00:18:18.918201  854283 cache.go:108] acquiring lock: {Name:mk38a66c138a8bc93fca526181a592966be7b0c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:18:18.918196  854283 cache.go:108] acquiring lock: {Name:mke68568f672459a827b0f040c3707ca1a385ade Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:18:18.918239  854283 cache.go:108] acquiring lock: {Name:mk890c4812a3f64c01016df6b109e10decacca0c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:18:18.918305  854283 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 exists
	I0813 00:18:18.918330  854283 cache.go:97] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4" took 140.23µs
	I0813 00:18:18.918350  854283 cache.go:81] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 succeeded
	I0813 00:18:18.918335  854283 cache.go:108] acquiring lock: {Name:mkec237ea5123288a66db6ae3e1a55b1613c3427 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:18:18.918345  854283 cache.go:108] acquiring lock: {Name:mk58d8c6e0cdee1b874acbbfc9bfd22c507b5660 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:18:18.918377  854283 image.go:133] retrieving image: k8s.gcr.io/kube-apiserver:v1.17.3
	I0813 00:18:18.918212  854283 cache.go:108] acquiring lock: {Name:mk27d786d8fb664541b8cc69619779994a04b509 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:18:18.918378  854283 cache.go:108] acquiring lock: {Name:mk12ef9a95276dd16390bff56e76634bab8dee70 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:18:18.918388  854283 cache.go:108] acquiring lock: {Name:mke7fea6e92979b79e213cf80e068498c1416a2d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:18:18.918341  854283 cache.go:108] acquiring lock: {Name:mk20e6e36da8dc63ac3481834d3b46a6cfffeec1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:18:18.918383  854283 cache.go:205] Successfully downloaded all kic artifacts
	I0813 00:18:18.918509  854283 image.go:133] retrieving image: k8s.gcr.io/kube-controller-manager:v1.17.3
	I0813 00:18:18.918523  854283 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0813 00:18:18.918535  854283 start.go:313] acquiring machines lock for test-preload-20210813001622-820289: {Name:mk2d46e46728943fc604570595bb7616469b4e8e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 00:18:18.918550  854283 cache.go:97] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 173.602µs
	I0813 00:18:18.918566  854283 cache.go:81] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0813 00:18:18.918532  854283 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 exists
	I0813 00:18:18.918590  854283 start.go:317] acquired machines lock for "test-preload-20210813001622-820289" in 43.192µs
	I0813 00:18:18.918593  854283 image.go:133] retrieving image: k8s.gcr.io/kube-scheduler:v1.17.3
	I0813 00:18:18.918583  854283 cache.go:97] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0" took 293.368µs
	I0813 00:18:18.918605  854283 image.go:133] retrieving image: k8s.gcr.io/kube-proxy:v1.17.3
	I0813 00:18:18.918608  854283 cache.go:81] save to tar file k8s.gcr.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 succeeded
	I0813 00:18:18.918607  854283 start.go:93] Skipping create...Using existing machine configuration
	I0813 00:18:18.918664  854283 fix.go:55] fixHost starting: 
	I0813 00:18:18.918495  854283 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 exists
	I0813 00:18:18.918709  854283 cache.go:97] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0" took 321.577µs
	I0813 00:18:18.918725  854283 cache.go:81] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 succeeded
	I0813 00:18:18.918868  854283 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/pause_3.1 exists
	I0813 00:18:18.918888  854283 cache.go:97] cache image "k8s.gcr.io/pause:3.1" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/pause_3.1" took 701.213µs
	I0813 00:18:18.918875  854283 cache.go:108] acquiring lock: {Name:mk5361624250299d1e1cd777a581e4a0e0a61cbc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:18:18.918902  854283 cache.go:81] save to tar file k8s.gcr.io/pause:3.1 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/pause_3.1 succeeded
	I0813 00:18:18.918976  854283 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/coredns_1.6.5 exists
	I0813 00:18:18.919003  854283 cache.go:97] cache image "k8s.gcr.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/coredns_1.6.5" took 148.973µs
	I0813 00:18:18.919014  854283 cache.go:81] save to tar file k8s.gcr.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/coredns_1.6.5 succeeded
	I0813 00:18:18.919066  854283 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 00:18:18.919108  854283 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 00:18:18.919433  854283 image.go:175] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.17.3: Error response from daemon: reference does not exist
	I0813 00:18:18.919545  854283 image.go:175] daemon lookup for k8s.gcr.io/kube-proxy:v1.17.3: Error response from daemon: reference does not exist
	I0813 00:18:18.919584  854283 image.go:175] daemon lookup for k8s.gcr.io/kube-apiserver:v1.17.3: Error response from daemon: reference does not exist
	I0813 00:18:18.919586  854283 image.go:175] daemon lookup for k8s.gcr.io/kube-scheduler:v1.17.3: Error response from daemon: reference does not exist
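
	Two details in the burst above are easy to misread: the microsecond "took" timings mean the tarballs were already on disk (pure cache hits, so each "save" is a no-op), and the "daemon lookup ... reference does not exist" messages only say the images are absent from the local Docker daemon, so the pull is routed to the remote registry instead. A sketch of the hit/miss decision, with hypothetical names:

	    package main

	    import (
	    	"fmt"
	    	"os"
	    )

	    // cachedTarExists is the assumed shape of the check behind the
	    // "cache.go:116 ... exists" lines: a plain stat on the cached tarball.
	    func cachedTarExists(path string) bool {
	    	_, err := os.Stat(path)
	    	return err == nil
	    }

	    func main() {
	    	p := "/home/jenkins/.minikube/cache/images/k8s.gcr.io/pause_3.1"
	    	if cachedTarExists(p) {
	    		fmt.Println("cache hit, save is a no-op:", p)
	    	} else {
	    		fmt.Println("cache miss: pull from the registry, then save to tar")
	    	}
	    }
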
	I0813 00:18:18.929481  854283 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:39477
	I0813 00:18:18.929908  854283 main.go:130] libmachine: () Calling .GetVersion
	I0813 00:18:18.930478  854283 main.go:130] libmachine: Using API Version  1
	I0813 00:18:18.930497  854283 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 00:18:18.930864  854283 main.go:130] libmachine: () Calling .GetMachineName
	I0813 00:18:18.931065  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .DriverName
	I0813 00:18:18.931241  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .GetState
	I0813 00:18:18.933771  854283 fix.go:108] recreateIfNeeded on test-preload-20210813001622-820289: state=Running err=<nil>
	W0813 00:18:18.933788  854283 fix.go:134] unexpected machine state, will restart: <nil>
	I0813 00:18:18.936621  854283 out.go:177] * Updating the running kvm2 "test-preload-20210813001622-820289" VM ...
	I0813 00:18:18.936656  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .DriverName
	I0813 00:18:18.936830  854283 machine.go:88] provisioning docker machine ...
	I0813 00:18:18.936851  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .DriverName
	I0813 00:18:18.936993  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .GetMachineName
	I0813 00:18:18.937115  854283 buildroot.go:166] provisioning hostname "test-preload-20210813001622-820289"
	I0813 00:18:18.937134  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .GetMachineName
	I0813 00:18:18.937260  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .GetSSHHostname
	I0813 00:18:18.941829  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) DBG | domain test-preload-20210813001622-820289 has defined MAC address 52:54:00:fc:fa:f1 in network mk-test-preload-20210813001622-820289
	I0813 00:18:18.942163  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:fa:f1", ip: ""} in network mk-test-preload-20210813001622-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:16:40 +0000 UTC Type:0 Mac:52:54:00:fc:fa:f1 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:test-preload-20210813001622-820289 Clientid:01:52:54:00:fc:fa:f1}
	I0813 00:18:18.942191  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) DBG | domain test-preload-20210813001622-820289 has defined IP address 192.168.39.8 and MAC address 52:54:00:fc:fa:f1 in network mk-test-preload-20210813001622-820289
	I0813 00:18:18.942284  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .GetSSHPort
	I0813 00:18:18.942439  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .GetSSHKeyPath
	I0813 00:18:18.942579  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .GetSSHKeyPath
	I0813 00:18:18.942691  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .GetSSHUsername
	I0813 00:18:18.942804  854283 main.go:130] libmachine: Using SSH client type: native
	I0813 00:18:18.942986  854283 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.8 22 <nil> <nil>}
	I0813 00:18:18.943002  854283 main.go:130] libmachine: About to run SSH command:
	sudo hostname test-preload-20210813001622-820289 && echo "test-preload-20210813001622-820289" | sudo tee /etc/hostname
	I0813 00:18:19.080186  854283 main.go:130] libmachine: SSH cmd err, output: <nil>: test-preload-20210813001622-820289
	
	I0813 00:18:19.080226  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .GetSSHHostname
	I0813 00:18:19.084941  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) DBG | domain test-preload-20210813001622-820289 has defined MAC address 52:54:00:fc:fa:f1 in network mk-test-preload-20210813001622-820289
	I0813 00:18:19.085251  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:fa:f1", ip: ""} in network mk-test-preload-20210813001622-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:16:40 +0000 UTC Type:0 Mac:52:54:00:fc:fa:f1 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:test-preload-20210813001622-820289 Clientid:01:52:54:00:fc:fa:f1}
	I0813 00:18:19.085281  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) DBG | domain test-preload-20210813001622-820289 has defined IP address 192.168.39.8 and MAC address 52:54:00:fc:fa:f1 in network mk-test-preload-20210813001622-820289
	I0813 00:18:19.085404  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .GetSSHPort
	I0813 00:18:19.085587  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .GetSSHKeyPath
	I0813 00:18:19.085716  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .GetSSHKeyPath
	I0813 00:18:19.085843  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .GetSSHUsername
	I0813 00:18:19.085987  854283 main.go:130] libmachine: Using SSH client type: native
	I0813 00:18:19.086101  854283 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.8 22 <nil> <nil>}
	I0813 00:18:19.086118  854283 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-20210813001622-820289' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-20210813001622-820289/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-20210813001622-820289' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 00:18:19.221470  854283 main.go:130] libmachine: SSH cmd err, output: <nil>: 
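
	The two SSH commands above are standard hostname provisioning: the first sets the kernel hostname and persists it to /etc/hostname, the second pins the name to 127.0.1.1 in /etc/hosts so that sudo and the kubelet can resolve the node name without DNS. A hedged sketch that assembles equivalent commands; the helper is hypothetical, and it condenses the sed/append branching above into a single append-if-missing:

	    package main

	    import "fmt"

	    // hostnameCommands builds the same two provisioning steps seen in the
	    // log; in minikube they are executed over the SSH session, not locally.
	    func hostnameCommands(name string) []string {
	    	return []string{
	    		fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name),
	    		fmt.Sprintf("grep -q '127.0.1.1 %s' /etc/hosts || echo '127.0.1.1 %s' | sudo tee -a /etc/hosts", name, name),
	    	}
	    }

	    func main() {
	    	for _, c := range hostnameCommands("test-preload-20210813001622-820289") {
	    		fmt.Println(c)
	    	}
	    }
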
	I0813 00:18:19.221501  854283 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube}
	I0813 00:18:19.221570  854283 buildroot.go:174] setting up certificates
	I0813 00:18:19.221582  854283 provision.go:83] configureAuth start
	I0813 00:18:19.221649  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .GetMachineName
	I0813 00:18:19.221866  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .GetIP
	I0813 00:18:19.226558  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) DBG | domain test-preload-20210813001622-820289 has defined MAC address 52:54:00:fc:fa:f1 in network mk-test-preload-20210813001622-820289
	I0813 00:18:19.226855  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:fa:f1", ip: ""} in network mk-test-preload-20210813001622-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:16:40 +0000 UTC Type:0 Mac:52:54:00:fc:fa:f1 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:test-preload-20210813001622-820289 Clientid:01:52:54:00:fc:fa:f1}
	I0813 00:18:19.226884  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) DBG | domain test-preload-20210813001622-820289 has defined IP address 192.168.39.8 and MAC address 52:54:00:fc:fa:f1 in network mk-test-preload-20210813001622-820289
	I0813 00:18:19.227038  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .GetSSHHostname
	I0813 00:18:19.231098  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) DBG | domain test-preload-20210813001622-820289 has defined MAC address 52:54:00:fc:fa:f1 in network mk-test-preload-20210813001622-820289
	I0813 00:18:19.231374  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:fa:f1", ip: ""} in network mk-test-preload-20210813001622-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:16:40 +0000 UTC Type:0 Mac:52:54:00:fc:fa:f1 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:test-preload-20210813001622-820289 Clientid:01:52:54:00:fc:fa:f1}
	I0813 00:18:19.231394  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) DBG | domain test-preload-20210813001622-820289 has defined IP address 192.168.39.8 and MAC address 52:54:00:fc:fa:f1 in network mk-test-preload-20210813001622-820289
	I0813 00:18:19.231527  854283 provision.go:137] copyHostCerts
	I0813 00:18:19.231584  854283 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem, removing ...
	I0813 00:18:19.231593  854283 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem
	I0813 00:18:19.231647  854283 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem (1078 bytes)
	I0813 00:18:19.231742  854283 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem, removing ...
	I0813 00:18:19.231751  854283 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem
	I0813 00:18:19.231774  854283 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem (1123 bytes)
	I0813 00:18:19.231835  854283 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem, removing ...
	I0813 00:18:19.231842  854283 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem
	I0813 00:18:19.231860  854283 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem (1679 bytes)
	I0813 00:18:19.231909  854283 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem org=jenkins.test-preload-20210813001622-820289 san=[192.168.39.8 192.168.39.8 localhost 127.0.0.1 minikube test-preload-20210813001622-820289]
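
	The SAN list in the provision.go line above covers the VM IP, localhost, and both node names, so the single server.pem satisfies whichever address a client dials. A condensed sketch of issuing such a certificate with Go's x509 package; minikube signs it with the cached CA, whereas this sketch self-signs for brevity:

	    package main

	    import (
	    	"crypto/rand"
	    	"crypto/rsa"
	    	"crypto/x509"
	    	"crypto/x509/pkix"
	    	"fmt"
	    	"math/big"
	    	"net"
	    	"time"
	    )

	    func main() {
	    	key, err := rsa.GenerateKey(rand.Reader, 2048)
	    	if err != nil {
	    		panic(err)
	    	}
	    	// SANs mirror the san=[...] list in the log line above.
	    	tmpl := &x509.Certificate{
	    		SerialNumber: big.NewInt(1),
	    		Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-20210813001622-820289"}},
	    		NotBefore:    time.Now(),
	    		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
	    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	    		DNSNames:     []string{"localhost", "minikube", "test-preload-20210813001622-820289"},
	    		IPAddresses:  []net.IP{net.ParseIP("192.168.39.8"), net.ParseIP("127.0.0.1")},
	    	}
	    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	    	if err != nil {
	    		panic(err)
	    	}
	    	fmt.Printf("issued %d-byte DER server cert with IP and DNS SANs\n", len(der))
	    }
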
	I0813 00:18:19.565084  854283 provision.go:171] copyRemoteCerts
	I0813 00:18:19.565143  854283 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 00:18:19.565174  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .GetSSHHostname
	I0813 00:18:19.570313  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) DBG | domain test-preload-20210813001622-820289 has defined MAC address 52:54:00:fc:fa:f1 in network mk-test-preload-20210813001622-820289
	I0813 00:18:19.570615  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:fa:f1", ip: ""} in network mk-test-preload-20210813001622-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:16:40 +0000 UTC Type:0 Mac:52:54:00:fc:fa:f1 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:test-preload-20210813001622-820289 Clientid:01:52:54:00:fc:fa:f1}
	I0813 00:18:19.570654  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) DBG | domain test-preload-20210813001622-820289 has defined IP address 192.168.39.8 and MAC address 52:54:00:fc:fa:f1 in network mk-test-preload-20210813001622-820289
	I0813 00:18:19.570741  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .GetSSHPort
	I0813 00:18:19.570972  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .GetSSHKeyPath
	I0813 00:18:19.571155  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .GetSSHUsername
	I0813 00:18:19.571289  854283 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/test-preload-20210813001622-820289/id_rsa Username:docker}
	I0813 00:18:19.662342  854283 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 00:18:19.679493  854283 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem --> /etc/docker/server.pem (1273 bytes)
	I0813 00:18:19.697378  854283 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0813 00:18:19.713952  854283 provision.go:86] duration metric: configureAuth took 492.356141ms
	I0813 00:18:19.713970  854283 buildroot.go:189] setting minikube options for container-runtime
	I0813 00:18:19.714206  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .GetSSHHostname
	I0813 00:18:19.719148  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) DBG | domain test-preload-20210813001622-820289 has defined MAC address 52:54:00:fc:fa:f1 in network mk-test-preload-20210813001622-820289
	I0813 00:18:19.719448  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:fa:f1", ip: ""} in network mk-test-preload-20210813001622-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:16:40 +0000 UTC Type:0 Mac:52:54:00:fc:fa:f1 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:test-preload-20210813001622-820289 Clientid:01:52:54:00:fc:fa:f1}
	I0813 00:18:19.719477  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) DBG | domain test-preload-20210813001622-820289 has defined IP address 192.168.39.8 and MAC address 52:54:00:fc:fa:f1 in network mk-test-preload-20210813001622-820289
	I0813 00:18:19.719608  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .GetSSHPort
	I0813 00:18:19.719821  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .GetSSHKeyPath
	I0813 00:18:19.719965  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .GetSSHKeyPath
	I0813 00:18:19.720118  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .GetSSHUsername
	I0813 00:18:19.720260  854283 main.go:130] libmachine: Using SSH client type: native
	I0813 00:18:19.720401  854283 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.8 22 <nil> <nil>}
	I0813 00:18:19.720416  854283 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 00:18:20.577562  854283 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.3
	I0813 00:18:20.593506  854283 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.3
	I0813 00:18:20.659079  854283 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.3
	I0813 00:18:20.964579  854283 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 00:18:20.964610  854283 machine.go:91] provisioned docker machine in 2.027765032s
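
	The sysconfig write a few lines up is how the service CIDR (10.96.0.0/12) reaches CRI-O as an insecure registry; presumably the crio unit on the minikube ISO sources /etc/sysconfig/crio.minikube, which is why the write is chained with systemctl restart crio, and the echoed CRIO_MINIKUBE_OPTIONS above confirms the file landed. A local sketch of the same write, paths as in the log:

	    package main

	    import (
	    	"fmt"
	    	"os"
	    )

	    // Writes the same drop-in seen in the log. On the minikube VM this runs
	    // over SSH with sudo; writing under /etc/sysconfig locally needs root too.
	    func main() {
	    	content := "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
	    	if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(content), 0o644); err != nil {
	    		fmt.Println("write failed (expected unless run as root):", err)
	    		return
	    	}
	    	fmt.Println("drop-in written; restart crio to pick it up")
	    }
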
	I0813 00:18:20.964623  854283 start.go:267] post-start starting for "test-preload-20210813001622-820289" (driver="kvm2")
	I0813 00:18:20.964631  854283 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 00:18:20.964654  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .DriverName
	I0813 00:18:20.964950  854283 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 00:18:20.965001  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .GetSSHHostname
	I0813 00:18:20.971550  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) DBG | domain test-preload-20210813001622-820289 has defined MAC address 52:54:00:fc:fa:f1 in network mk-test-preload-20210813001622-820289
	I0813 00:18:20.971953  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:fa:f1", ip: ""} in network mk-test-preload-20210813001622-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:16:40 +0000 UTC Type:0 Mac:52:54:00:fc:fa:f1 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:test-preload-20210813001622-820289 Clientid:01:52:54:00:fc:fa:f1}
	I0813 00:18:20.971994  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) DBG | domain test-preload-20210813001622-820289 has defined IP address 192.168.39.8 and MAC address 52:54:00:fc:fa:f1 in network mk-test-preload-20210813001622-820289
	I0813 00:18:20.972143  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .GetSSHPort
	I0813 00:18:20.972329  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .GetSSHKeyPath
	I0813 00:18:20.972454  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .GetSSHUsername
	I0813 00:18:20.972538  854283 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/test-preload-20210813001622-820289/id_rsa Username:docker}
	I0813 00:18:21.063237  854283 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 00:18:21.068801  854283 info.go:137] Remote host: Buildroot 2020.02.12
	I0813 00:18:21.068827  854283 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/addons for local assets ...
	I0813 00:18:21.068879  854283 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files for local assets ...
	I0813 00:18:21.068983  854283 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/8202892.pem -> 8202892.pem in /etc/ssl/certs
	I0813 00:18:21.069092  854283 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 00:18:21.077742  854283 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/8202892.pem --> /etc/ssl/certs/8202892.pem (1708 bytes)
	I0813 00:18:21.099327  854283 start.go:270] post-start completed in 134.69146ms
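
	Post-start mirrors everything under .minikube/files into the guest with relative paths preserved, which is how 8202892.pem ends up in /etc/ssl/certs. A sketch of that scan; the helper name is hypothetical, and only the walk-and-map shape comes from the filesync.go lines:

	    package main

	    import (
	    	"fmt"
	    	"io/fs"
	    	"path/filepath"
	    )

	    // listLocalAssets walks the local assets tree the way filesync.go scans
	    // it; each file's path relative to root becomes its destination in the guest.
	    func listLocalAssets(root string) ([]string, error) {
	    	var assets []string
	    	err := filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
	    		if err != nil {
	    			return err
	    		}
	    		if !d.IsDir() {
	    			rel, _ := filepath.Rel(root, p)
	    			assets = append(assets, "/"+rel) // e.g. /etc/ssl/certs/8202892.pem
	    		}
	    		return nil
	    	})
	    	return assets, err
	    }

	    func main() {
	    	assets, err := listLocalAssets("/home/jenkins/.minikube/files")
	    	fmt.Println(assets, err)
	    }
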
	I0813 00:18:21.099356  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .DriverName
	I0813 00:18:21.099587  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .GetSSHHostname
	I0813 00:18:21.105908  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) DBG | domain test-preload-20210813001622-820289 has defined MAC address 52:54:00:fc:fa:f1 in network mk-test-preload-20210813001622-820289
	I0813 00:18:21.106263  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:fa:f1", ip: ""} in network mk-test-preload-20210813001622-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:16:40 +0000 UTC Type:0 Mac:52:54:00:fc:fa:f1 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:test-preload-20210813001622-820289 Clientid:01:52:54:00:fc:fa:f1}
	I0813 00:18:21.106295  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) DBG | domain test-preload-20210813001622-820289 has defined IP address 192.168.39.8 and MAC address 52:54:00:fc:fa:f1 in network mk-test-preload-20210813001622-820289
	I0813 00:18:21.106459  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .GetSSHPort
	I0813 00:18:21.106634  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .GetSSHKeyPath
	I0813 00:18:21.106773  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .GetSSHKeyPath
	I0813 00:18:21.106972  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .GetSSHUsername
	I0813 00:18:21.107125  854283 main.go:130] libmachine: Using SSH client type: native
	I0813 00:18:21.107294  854283 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.39.8 22 <nil> <nil>}
	I0813 00:18:21.107308  854283 main.go:130] libmachine: About to run SSH command:
	date +%s.%N
	I0813 00:18:21.148927  854283 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.3 exists
	I0813 00:18:21.148978  854283 cache.go:97] cache image "k8s.gcr.io/kube-apiserver:v1.17.3" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.3" took 2.230738606s
	I0813 00:18:21.148996  854283 cache.go:81] save to tar file k8s.gcr.io/kube-apiserver:v1.17.3 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.3 succeeded
	I0813 00:18:21.191500  854283 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.3 exists
	I0813 00:18:21.191559  854283 cache.go:97] cache image "k8s.gcr.io/kube-scheduler:v1.17.3" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.3" took 2.273276522s
	I0813 00:18:21.191581  854283 cache.go:81] save to tar file k8s.gcr.io/kube-scheduler:v1.17.3 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.3 succeeded
	I0813 00:18:21.199932  854283 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.3 exists
	I0813 00:18:21.199967  854283 cache.go:97] cache image "k8s.gcr.io/kube-controller-manager:v1.17.3" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.3" took 2.281760265s
	I0813 00:18:21.199998  854283 cache.go:81] save to tar file k8s.gcr.io/kube-controller-manager:v1.17.3 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.3 succeeded
	I0813 00:18:21.240522  854283 main.go:130] libmachine: SSH cmd err, output: <nil>: 1628813901.240327573
	
	I0813 00:18:21.240549  854283 fix.go:212] guest clock: 1628813901.240327573
	I0813 00:18:21.240559  854283 fix.go:225] Guest: 2021-08-13 00:18:21.240327573 +0000 UTC Remote: 2021-08-13 00:18:21.099553865 +0000 UTC m=+2.366520239 (delta=140.773708ms)
	I0813 00:18:21.240583  854283 fix.go:196] guest clock delta is within tolerance: 140.773708ms
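
	The fix.go lines above are a clock sanity check: read the guest clock over SSH with date +%s.%N, diff it against the host timestamp, and only reset the guest clock when the skew leaves tolerance; the ~141ms delta here passes. A minimal sketch of the comparison, with an assumed 1s threshold (the actual tolerance value is not shown in the log):

	    package main

	    import (
	    	"fmt"
	    	"time"
	    )

	    // withinTolerance mirrors the check in the log: |guest - host| must stay
	    // under the allowed skew. The threshold is an assumption for this sketch.
	    func withinTolerance(guest, host time.Time, max time.Duration) bool {
	    	delta := guest.Sub(host)
	    	if delta < 0 {
	    		delta = -delta
	    	}
	    	return delta <= max
	    }

	    func main() {
	    	host := time.Now()
	    	guest := host.Add(140773708 * time.Nanosecond) // the delta from the log
	    	fmt.Println("within tolerance:", withinTolerance(guest, host, time.Second))
	    }
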
	I0813 00:18:21.240591  854283 fix.go:57] fixHost completed within 2.321928433s
	I0813 00:18:21.240601  854283 start.go:80] releasing machines lock for "test-preload-20210813001622-820289", held for 2.322001913s
	I0813 00:18:21.240636  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .DriverName
	I0813 00:18:21.240899  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .GetIP
	I0813 00:18:21.246186  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) DBG | domain test-preload-20210813001622-820289 has defined MAC address 52:54:00:fc:fa:f1 in network mk-test-preload-20210813001622-820289
	I0813 00:18:21.246513  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:fa:f1", ip: ""} in network mk-test-preload-20210813001622-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:16:40 +0000 UTC Type:0 Mac:52:54:00:fc:fa:f1 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:test-preload-20210813001622-820289 Clientid:01:52:54:00:fc:fa:f1}
	I0813 00:18:21.246543  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) DBG | domain test-preload-20210813001622-820289 has defined IP address 192.168.39.8 and MAC address 52:54:00:fc:fa:f1 in network mk-test-preload-20210813001622-820289
	I0813 00:18:21.246670  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .DriverName
	I0813 00:18:21.246870  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .DriverName
	I0813 00:18:21.247379  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .DriverName
	I0813 00:18:21.247663  854283 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 00:18:21.247701  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .GetSSHHostname
	I0813 00:18:21.252244  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) DBG | domain test-preload-20210813001622-820289 has defined MAC address 52:54:00:fc:fa:f1 in network mk-test-preload-20210813001622-820289
	I0813 00:18:21.252532  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:fa:f1", ip: ""} in network mk-test-preload-20210813001622-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:16:40 +0000 UTC Type:0 Mac:52:54:00:fc:fa:f1 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:test-preload-20210813001622-820289 Clientid:01:52:54:00:fc:fa:f1}
	I0813 00:18:21.252563  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) DBG | domain test-preload-20210813001622-820289 has defined IP address 192.168.39.8 and MAC address 52:54:00:fc:fa:f1 in network mk-test-preload-20210813001622-820289
	I0813 00:18:21.252654  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .GetSSHPort
	I0813 00:18:21.252854  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .GetSSHKeyPath
	I0813 00:18:21.252999  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .GetSSHUsername
	I0813 00:18:21.253146  854283 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/test-preload-20210813001622-820289/id_rsa Username:docker}
	I0813 00:18:22.156269  854283 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.3
	I0813 00:18:22.776653  854283 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.3 exists
	I0813 00:18:22.776698  854283 cache.go:97] cache image "k8s.gcr.io/kube-proxy:v1.17.3" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.3" took 3.858395868s
	I0813 00:18:22.776719  854283 cache.go:81] save to tar file k8s.gcr.io/kube-proxy:v1.17.3 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.3 succeeded
	I0813 00:18:22.776744  854283 cache.go:88] Successfully saved all images to host disk.
	I0813 00:18:22.776791  854283 ssh_runner.go:149] Run: systemctl --version
	I0813 00:18:22.783420  854283 preload.go:131] Checking if preload exists for k8s version v1.17.3 and runtime crio
	I0813 00:18:22.783493  854283 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 00:18:22.796318  854283 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 00:18:22.806490  854283 docker.go:153] disabling docker service ...
	I0813 00:18:22.806528  854283 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 00:18:22.817352  854283 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 00:18:22.827264  854283 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 00:18:23.044183  854283 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 00:18:23.232753  854283 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 00:18:23.244999  854283 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 00:18:23.258594  854283 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.1"|' -i /etc/crio/crio.conf"
	I0813 00:18:23.267370  854283 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 00:18:23.274393  854283 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
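
	The three commands above are the runtime configuration pass: point crictl at the CRI-O socket, pin CRI-O's pause_image to the pause:3.1 that a v1.17.3 control plane expects, and enable the bridge-netfilter/ip_forward sysctls that pod traffic needs. A sketch equivalent to the pause_image sed, run against a stand-in for /etc/crio/crio.conf:

	    package main

	    import (
	    	"fmt"
	    	"regexp"
	    )

	    func main() {
	    	conf := `pause_image = "k8s.gcr.io/pause:3.2"` // stand-in line from crio.conf
	    	re := regexp.MustCompile(`(?m)^pause_image = .*$`)
	    	fmt.Println(re.ReplaceAllString(conf, `pause_image = "k8s.gcr.io/pause:3.1"`))
	    }
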
	I0813 00:18:23.282554  854283 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 00:18:23.465685  854283 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 00:18:23.751095  854283 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 00:18:23.751177  854283 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
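
	"Will wait 60s for socket path" is a poll: stat the CRI socket until it appears or the deadline expires, after which the same pattern repeats for crictl version. An assumed implementation of that wait; the interval and wording are illustrative, only the stat-until-deadline shape comes from the log:

	    package main

	    import (
	    	"fmt"
	    	"os"
	    	"time"
	    )

	    // waitForSocket polls for the socket file until the deadline passes.
	    func waitForSocket(path string, timeout time.Duration) error {
	    	deadline := time.Now().Add(timeout)
	    	for time.Now().Before(deadline) {
	    		if _, err := os.Stat(path); err == nil {
	    			return nil
	    		}
	    		time.Sleep(500 * time.Millisecond)
	    	}
	    	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
	    }

	    func main() {
	    	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
	    		fmt.Println(err)
	    	}
	    }
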
	I0813 00:18:23.757394  854283 start.go:417] Will wait 60s for crictl version
	I0813 00:18:23.757460  854283 ssh_runner.go:149] Run: sudo crictl version
	I0813 00:18:23.792614  854283 start.go:426] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.2
	RuntimeApiVersion:  v1alpha1
	I0813 00:18:23.792707  854283 ssh_runner.go:149] Run: crio --version
	I0813 00:18:24.029788  854283 ssh_runner.go:149] Run: crio --version
	I0813 00:18:24.207643  854283 out.go:177] * Preparing Kubernetes v1.17.3 on CRI-O 1.20.2 ...
	I0813 00:18:24.207684  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .GetIP
	I0813 00:18:24.212704  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) DBG | domain test-preload-20210813001622-820289 has defined MAC address 52:54:00:fc:fa:f1 in network mk-test-preload-20210813001622-820289
	I0813 00:18:24.213090  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:fa:f1", ip: ""} in network mk-test-preload-20210813001622-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:16:40 +0000 UTC Type:0 Mac:52:54:00:fc:fa:f1 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:test-preload-20210813001622-820289 Clientid:01:52:54:00:fc:fa:f1}
	I0813 00:18:24.213124  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) DBG | domain test-preload-20210813001622-820289 has defined IP address 192.168.39.8 and MAC address 52:54:00:fc:fa:f1 in network mk-test-preload-20210813001622-820289
	I0813 00:18:24.213320  854283 ssh_runner.go:149] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0813 00:18:24.217731  854283 preload.go:131] Checking if preload exists for k8s version v1.17.3 and runtime crio
	I0813 00:18:24.217780  854283 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 00:18:24.262825  854283 crio.go:420] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.17.3". assuming images are not preloaded.
	I0813 00:18:24.262848  854283 cache_images.go:78] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.17.3 k8s.gcr.io/kube-controller-manager:v1.17.3 k8s.gcr.io/kube-scheduler:v1.17.3 k8s.gcr.io/kube-proxy:v1.17.3 k8s.gcr.io/pause:3.1 k8s.gcr.io/etcd:3.4.3-0 k8s.gcr.io/coredns:1.6.5 gcr.io/k8s-minikube/storage-provisioner:v5 docker.io/kubernetesui/dashboard:v2.1.0 docker.io/kubernetesui/metrics-scraper:v1.0.4]
	I0813 00:18:24.262905  854283 image.go:133] retrieving image: docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0813 00:18:24.262936  854283 image.go:133] retrieving image: k8s.gcr.io/pause:3.1
	I0813 00:18:24.262948  854283 image.go:133] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 00:18:24.263005  854283 image.go:133] retrieving image: docker.io/kubernetesui/dashboard:v2.1.0
	I0813 00:18:24.263072  854283 image.go:133] retrieving image: k8s.gcr.io/etcd:3.4.3-0
	I0813 00:18:24.263096  854283 image.go:133] retrieving image: k8s.gcr.io/kube-proxy:v1.17.3
	I0813 00:18:24.263158  854283 image.go:133] retrieving image: k8s.gcr.io/coredns:1.6.5
	I0813 00:18:24.263158  854283 image.go:133] retrieving image: k8s.gcr.io/kube-apiserver:v1.17.3
	I0813 00:18:24.263214  854283 image.go:133] retrieving image: k8s.gcr.io/kube-controller-manager:v1.17.3
	I0813 00:18:24.263365  854283 image.go:133] retrieving image: k8s.gcr.io/kube-scheduler:v1.17.3
	I0813 00:18:24.263816  854283 image.go:175] daemon lookup for k8s.gcr.io/kube-proxy:v1.17.3: Error response from daemon: reference does not exist
	I0813 00:18:24.264247  854283 image.go:175] daemon lookup for k8s.gcr.io/kube-apiserver:v1.17.3: Error response from daemon: reference does not exist
	I0813 00:18:24.264323  854283 image.go:175] daemon lookup for k8s.gcr.io/kube-scheduler:v1.17.3: Error response from daemon: reference does not exist
	I0813 00:18:24.264399  854283 image.go:175] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.17.3: Error response from daemon: reference does not exist
	I0813 00:18:24.276157  854283 image.go:171] found k8s.gcr.io/pause:3.1 locally: &{Image:0xc000391b00}
	I0813 00:18:24.276241  854283 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/pause:3.1
	I0813 00:18:24.362354  854283 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-proxy:v1.17.3
	I0813 00:18:24.584897  854283 cache_images.go:106] "k8s.gcr.io/kube-proxy:v1.17.3" needs transfer: "k8s.gcr.io/kube-proxy:v1.17.3" does not exist at hash "ae853e93800dc2572aeb425e5765cf9b25212bfc43695299e61dece06cffa4a1" in container runtime
	I0813 00:18:24.584952  854283 cri.go:205] Removing image: k8s.gcr.io/kube-proxy:v1.17.3
	I0813 00:18:24.584993  854283 ssh_runner.go:149] Run: which crictl
	I0813 00:18:24.591348  854283 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/kube-proxy:v1.17.3
	I0813 00:18:24.652745  854283 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.3
	I0813 00:18:24.652833  854283 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.17.3
	I0813 00:18:24.662909  854283 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-proxy_v1.17.3: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.17.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.17.3': No such file or directory
	I0813 00:18:24.662944  854283 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.3 --> /var/lib/minikube/images/kube-proxy_v1.17.3 (48706048 bytes)
	I0813 00:18:24.690518  854283 image.go:171] found gcr.io/k8s-minikube/storage-provisioner:v5 locally: &{Image:0xc000ef60c0}
	I0813 00:18:24.690623  854283 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 00:18:24.791912  854283 image.go:171] found index.docker.io/kubernetesui/metrics-scraper:v1.0.4 locally: &{Image:0xc000ef60c0}
	I0813 00:18:24.792014  854283 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0813 00:18:24.810236  854283 image.go:171] found k8s.gcr.io/coredns:1.6.5 locally: &{Image:0xc000ef6420}
	I0813 00:18:24.810332  854283 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/coredns:1.6.5
	I0813 00:18:25.195163  854283 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-apiserver:v1.17.3
	I0813 00:18:25.197803  854283 crio.go:191] Loading image: /var/lib/minikube/images/kube-proxy_v1.17.3
	I0813 00:18:25.197854  854283 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.17.3
	I0813 00:18:25.409607  854283 cache_images.go:106] "k8s.gcr.io/kube-apiserver:v1.17.3" needs transfer: "k8s.gcr.io/kube-apiserver:v1.17.3" does not exist at hash "90d27391b7808cde8d9a81cfa43b1e81de5c4912b4b52a7dccb19eb4fe3c236b" in container runtime
	I0813 00:18:25.409664  854283 cri.go:205] Removing image: k8s.gcr.io/kube-apiserver:v1.17.3
	I0813 00:18:25.409710  854283 ssh_runner.go:149] Run: which crictl
	I0813 00:18:26.005051  854283 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-controller-manager:v1.17.3
	I0813 00:18:26.198537  854283 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-scheduler:v1.17.3
	I0813 00:18:26.760941  854283 image.go:171] found index.docker.io/kubernetesui/dashboard:v2.1.0 locally: &{Image:0xc000cac3e0}
	I0813 00:18:26.761061  854283 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} docker.io/kubernetesui/dashboard:v2.1.0
	I0813 00:18:27.248374  854283 image.go:171] found k8s.gcr.io/etcd:3.4.3-0 locally: &{Image:0xc000390300}
	I0813 00:18:27.248488  854283 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/etcd:3.4.3-0
	I0813 00:18:28.043388  854283 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.17.3: (2.845508976s)
	I0813 00:18:28.043421  854283 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.3 from cache
	I0813 00:18:28.043454  854283 ssh_runner.go:189] Completed: which crictl: (2.633720413s)
	I0813 00:18:28.043504  854283 ssh_runner.go:189] Completed: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-controller-manager:v1.17.3: (2.038423731s)
	I0813 00:18:28.043543  854283 cache_images.go:106] "k8s.gcr.io/kube-controller-manager:v1.17.3" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.17.3" does not exist at hash "b0f1517c1f4bb153597033d2efd81a9ac630e6a569307f993b2c0368afcf0302" in container runtime
	I0813 00:18:28.043508  854283 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.17.3
	I0813 00:18:28.043589  854283 cri.go:205] Removing image: k8s.gcr.io/kube-controller-manager:v1.17.3
	I0813 00:18:28.043614  854283 ssh_runner.go:189] Completed: sudo podman image inspect --format {{.Id}} docker.io/kubernetesui/dashboard:v2.1.0: (1.282537027s)
	I0813 00:18:28.043628  854283 ssh_runner.go:149] Run: which crictl
	I0813 00:18:28.043574  854283 ssh_runner.go:189] Completed: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-scheduler:v1.17.3: (1.844952832s)
	I0813 00:18:28.043668  854283 cache_images.go:106] "k8s.gcr.io/kube-scheduler:v1.17.3" needs transfer: "k8s.gcr.io/kube-scheduler:v1.17.3" does not exist at hash "d109c0821a2b9225b69b99a95000df5cd1de5d606bc187b3620d730d7769c6ad" in container runtime
	I0813 00:18:28.043689  854283 cri.go:205] Removing image: k8s.gcr.io/kube-scheduler:v1.17.3
	I0813 00:18:28.043727  854283 ssh_runner.go:149] Run: which crictl
	I0813 00:18:28.088218  854283 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.3
	I0813 00:18:28.088285  854283 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.17.3
	I0813 00:18:28.088299  854283 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.17.3
	I0813 00:18:28.088289  854283 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.17.3
	I0813 00:18:28.143773  854283 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.3
	I0813 00:18:28.143862  854283 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.17.3
	I0813 00:18:28.149717  854283 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.3
	I0813 00:18:28.149770  854283 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-apiserver_v1.17.3: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.17.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.17.3': No such file or directory
	I0813 00:18:28.149798  854283 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.17.3
	I0813 00:18:28.149801  854283 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.3 --> /var/lib/minikube/images/kube-apiserver_v1.17.3 (50635776 bytes)
	I0813 00:18:28.156306  854283 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-scheduler_v1.17.3: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.17.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.17.3': No such file or directory
	I0813 00:18:28.156341  854283 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.3 --> /var/lib/minikube/images/kube-scheduler_v1.17.3 (33822208 bytes)
	I0813 00:18:28.169868  854283 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-controller-manager_v1.17.3: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.17.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.17.3': No such file or directory
	I0813 00:18:28.169891  854283 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.3 --> /var/lib/minikube/images/kube-controller-manager_v1.17.3 (48810496 bytes)
	I0813 00:18:28.667734  854283 crio.go:191] Loading image: /var/lib/minikube/images/kube-scheduler_v1.17.3
	I0813 00:18:28.667822  854283 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.17.3
	I0813 00:18:31.489319  854283 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.17.3: (2.821460514s)
	I0813 00:18:31.489348  854283 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.3 from cache
	I0813 00:18:31.489381  854283 crio.go:191] Loading image: /var/lib/minikube/images/kube-apiserver_v1.17.3
	I0813 00:18:31.489422  854283 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.17.3
	I0813 00:18:36.680829  854283 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.17.3: (5.191378043s)
	I0813 00:18:36.680859  854283 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.3 from cache
	I0813 00:18:36.680883  854283 crio.go:191] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.17.3
	I0813 00:18:36.680972  854283 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.17.3
	I0813 00:18:41.483775  854283 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.17.3: (4.802774328s)
	I0813 00:18:41.483807  854283 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.3 from cache
	I0813 00:18:41.483833  854283 cache_images.go:113] Successfully loaded all cached images
	I0813 00:18:41.483842  854283 cache_images.go:82] LoadImages completed in 17.220979525s
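
Each of the "Loading image" steps above hands an archive to CRI-O's image store by shelling out to podman, which is why the slow loads surface as Completed: sudo podman load -i ... lines. A hedged sketch of that invocation via os/exec (assuming sudo and podman are on PATH; the path is illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// loadImageArchive shells out the way the log does:
// "sudo podman load -i <tarball>".
func loadImageArchive(tarball string) error {
	cmd := exec.Command("sudo", "podman", "load", "-i", tarball)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v\n%s", tarball, err, out)
	}
	return nil
}

func main() {
	if err := loadImageArchive("/var/lib/minikube/images/kube-proxy_v1.17.3"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
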
	I0813 00:18:41.483925  854283 ssh_runner.go:149] Run: crio config
	I0813 00:18:41.718513  854283 cni.go:93] Creating CNI manager for ""
	I0813 00:18:41.718543  854283 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 00:18:41.718556  854283 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 00:18:41.718575  854283 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.8 APIServerPort:8443 KubernetesVersion:v1.17.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-20210813001622-820289 NodeName:test-preload-20210813001622-820289 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.8"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.39.8 CgroupDriver:systemd ClientCAFi
le:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 00:18:41.718753  854283 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.8
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "test-preload-20210813001622-820289"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.8
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.8"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.17.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
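The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are rendered from the cluster parameters listed at kubeadm.go:153. A toy re-creation of that rendering step with text/template (the template and field names here are illustrative, not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

// A trimmed stand-in for the ClusterConfiguration template.
const clusterTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
controlPlaneEndpoint: {{.ControlPlaneEndpoint}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("cc").Parse(clusterTmpl))
	// Values taken from the kubeadm options logged above.
	_ = t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion":    "v1.17.3",
		"ControlPlaneEndpoint": "control-plane.minikube.internal:8443",
		"PodSubnet":            "10.244.0.0/16",
		"ServiceSubnet":        "10.96.0.0/12",
	})
}
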
	I0813 00:18:41.718855  854283 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.17.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --hostname-override=test-preload-20210813001622-820289 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.8 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.17.3 ClusterName:test-preload-20210813001622-820289 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0813 00:18:41.718909  854283 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.17.3
	I0813 00:18:41.731090  854283 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.17.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.17.3': No such file or directory
	
	Initiating transfer...
	I0813 00:18:41.731152  854283 ssh_runner.go:149] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.17.3
	I0813 00:18:41.737878  854283 download.go:92] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/linux/v1.17.3/kubeadm
	I0813 00:18:41.737894  854283 download.go:92] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/linux/v1.17.3/kubectl
	I0813 00:18:41.737896  854283 download.go:92] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/linux/v1.17.3/kubelet
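
The ?checksum=file:<url>.sha256 suffix on each URL asks the download layer to verify the binary against its published SHA-256 sidecar before trusting it. A self-contained sketch of that verify-while-downloading idea (an assumption about the mechanism, not minikube's actual download package):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetchAndVerify downloads url to dst and checks it against the
// hash published at sumURL (first whitespace-separated field).
func fetchAndVerify(url, sumURL, dst string) error {
	sumResp, err := http.Get(sumURL)
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	sumBytes, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}
	fields := strings.Fields(string(sumBytes))
	if len(fields) == 0 {
		return fmt.Errorf("empty checksum file %s", sumURL)
	}
	want := fields[0]

	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	h := sha256.New()
	// Hash the stream as it is written to disk.
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	err := fetchAndVerify(
		"https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubeadm",
		"https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubeadm.sha256",
		"kubeadm")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
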
	I0813 00:18:42.279188  854283 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.3/kubeadm
	I0813 00:18:42.287313  854283 ssh_runner.go:306] existence check for /var/lib/minikube/binaries/v1.17.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.3/kubeadm': No such file or directory
	I0813 00:18:42.287350  854283 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/linux/v1.17.3/kubeadm --> /var/lib/minikube/binaries/v1.17.3/kubeadm (39346176 bytes)
	I0813 00:18:42.312793  854283 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.3/kubectl
	I0813 00:18:42.355359  854283 ssh_runner.go:306] existence check for /var/lib/minikube/binaries/v1.17.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.3/kubectl': No such file or directory
	I0813 00:18:42.355439  854283 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/linux/v1.17.3/kubectl --> /var/lib/minikube/binaries/v1.17.3/kubectl (43499520 bytes)
	I0813 00:18:43.075396  854283 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 00:18:43.086868  854283 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
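
kubelet is stopped before its binary is replaced: overwriting the executable of a running service would fail with ETXTBSY. A minimal equivalent of the is-active/stop pair above, assuming systemctl is available:

package main

import "os/exec"

// stopIfActive mirrors the two commands in the log: probe the unit
// with `systemctl is-active --quiet`, stop it only when running.
func stopIfActive(unit string) error {
	if exec.Command("sudo", "systemctl", "is-active", "--quiet", unit).Run() != nil {
		return nil // not active, nothing to stop
	}
	return exec.Command("sudo", "systemctl", "stop", "-f", unit).Run()
}

func main() {
	_ = stopIfActive("kubelet")
}
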
	I0813 00:18:43.121319  854283 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.3/kubelet
	I0813 00:18:43.126146  854283 ssh_runner.go:306] existence check for /var/lib/minikube/binaries/v1.17.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.3/kubelet': No such file or directory
	I0813 00:18:43.126179  854283 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/linux/v1.17.3/kubelet --> /var/lib/minikube/binaries/v1.17.3/kubelet (111584792 bytes)
	I0813 00:18:43.701441  854283 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 00:18:43.708827  854283 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (513 bytes)
	I0813 00:18:43.722063  854283 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 00:18:43.733565  854283 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
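
The three "scp memory -->" lines write content rendered in memory straight into place in the VM: the systemd drop-in, the kubelet unit, and the staged kubeadm config. Locally the same shape is just MkdirAll plus WriteFile (a sketch; the real writes go over SSH, and the payloads are elided here):

package main

import "os"

// writeRendered mimics the "scp memory --> <path>" steps: the payload
// never touches a temp file, it is written directly to its destination.
func writeRendered(path string, payload []byte) error {
	return os.WriteFile(path, payload, 0644)
}

func main() {
	_ = os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0755)
	_ = os.MkdirAll("/var/tmp/minikube", 0755)
	// Placeholder payloads; the log reports 513, 352 and 2075 bytes.
	_ = writeRendered("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte("...\n"))
	_ = writeRendered("/lib/systemd/system/kubelet.service", []byte("...\n"))
	_ = writeRendered("/var/tmp/minikube/kubeadm.yaml.new", []byte("...\n"))
}
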
	I0813 00:18:43.749138  854283 ssh_runner.go:149] Run: grep 192.168.39.8	control-plane.minikube.internal$ /etc/hosts
	I0813 00:18:43.753708  854283 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/test-preload-20210813001622-820289 for IP: 192.168.39.8
	I0813 00:18:43.753756  854283 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key
	I0813 00:18:43.753773  854283 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key
	I0813 00:18:43.753820  854283 certs.go:290] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/test-preload-20210813001622-820289/client.key
	I0813 00:18:43.753838  854283 certs.go:290] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/test-preload-20210813001622-820289/apiserver.key.8e2e64d5
	I0813 00:18:43.753853  854283 certs.go:290] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/test-preload-20210813001622-820289/proxy-client.key
	I0813 00:18:43.753959  854283 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/820289.pem (1338 bytes)
	W0813 00:18:43.753994  854283 certs.go:369] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/820289_empty.pem, impossibly tiny 0 bytes
	I0813 00:18:43.754004  854283 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 00:18:43.754041  854283 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem (1078 bytes)
	I0813 00:18:43.754070  854283 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem (1123 bytes)
	I0813 00:18:43.754098  854283 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem (1679 bytes)
	I0813 00:18:43.754141  854283 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/8202892.pem (1708 bytes)
	I0813 00:18:43.755121  854283 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/test-preload-20210813001622-820289/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 00:18:43.772891  854283 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/test-preload-20210813001622-820289/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 00:18:43.790344  854283 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/test-preload-20210813001622-820289/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 00:18:43.807796  854283 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/test-preload-20210813001622-820289/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0813 00:18:43.825398  854283 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 00:18:43.844104  854283 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0813 00:18:43.861494  854283 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 00:18:43.878178  854283 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 00:18:43.895809  854283 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/8202892.pem --> /usr/share/ca-certificates/8202892.pem (1708 bytes)
	I0813 00:18:43.912945  854283 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 00:18:43.930386  854283 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/820289.pem --> /usr/share/ca-certificates/820289.pem (1338 bytes)
	I0813 00:18:43.946753  854283 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 00:18:43.958780  854283 ssh_runner.go:149] Run: openssl version
	I0813 00:18:43.965470  854283 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 00:18:43.974688  854283 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 00:18:43.979610  854283 certs.go:416] hashing: -rw-r--r-- 1 root root 1111 Aug 12 23:51 /usr/share/ca-certificates/minikubeCA.pem
	I0813 00:18:43.979663  854283 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 00:18:43.985587  854283 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 00:18:43.992067  854283 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/820289.pem && ln -fs /usr/share/ca-certificates/820289.pem /etc/ssl/certs/820289.pem"
	I0813 00:18:43.999844  854283 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/820289.pem
	I0813 00:18:44.004492  854283 certs.go:416] hashing: -rw-r--r-- 1 root root 1338 Aug 12 23:59 /usr/share/ca-certificates/820289.pem
	I0813 00:18:44.004527  854283 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/820289.pem
	I0813 00:18:44.010379  854283 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/820289.pem /etc/ssl/certs/51391683.0"
	I0813 00:18:44.017318  854283 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8202892.pem && ln -fs /usr/share/ca-certificates/8202892.pem /etc/ssl/certs/8202892.pem"
	I0813 00:18:44.025270  854283 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8202892.pem
	I0813 00:18:44.030170  854283 certs.go:416] hashing: -rw-r--r-- 1 root root 1708 Aug 12 23:59 /usr/share/ca-certificates/8202892.pem
	I0813 00:18:44.030201  854283 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8202892.pem
	I0813 00:18:44.036017  854283 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8202892.pem /etc/ssl/certs/3ec20f2e.0"
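
The b5213941.0, 51391683.0 and 3ec20f2e.0 names above are OpenSSL subject hashes: openssl x509 -hash prints the value a verifier expects to find as a <hash>.0 symlink under /etc/ssl/certs. A sketch of that install step, shelling out to openssl the way the log does (paths are illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA reproduces the symlink dance: compute the subject hash of
// a CA certificate and link it into /etc/ssl/certs as <hash>.0, the
// name OpenSSL uses to look up trusted CAs.
func installCA(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	// Equivalent of ln -fs: drop any stale link before relinking.
	_ = os.Remove(link)
	return os.Symlink(pem, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
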
	I0813 00:18:44.042569  854283 kubeadm.go:390] StartCluster: {Name:test-preload-20210813001622-820289 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12122/minikube-v1.22.0-1628238775-12122.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.3 ClusterName:test-pr
eload-20210813001622-820289 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.8 Port:8443 KubernetesVersion:v1.17.3 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 00:18:44.042648  854283 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0813 00:18:44.042684  854283 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 00:18:44.076498  854283 cri.go:76] found id: "f2d595a01e7c1af4e0cd4816f9bbf733a0f2002cdd046c625ed5a27ca318b2e9"
	I0813 00:18:44.076519  854283 cri.go:76] found id: "73a27df1d6b5029b8527488f9c8d2a13a7d95ed682bed04b9ad9e5cc294ca29b"
	I0813 00:18:44.076526  854283 cri.go:76] found id: "008b8de52185ba93c748e98111e4051dbef1a3adb3818923a8ad7a9aeb54ce09"
	I0813 00:18:44.076531  854283 cri.go:76] found id: "062f5ddbf3e0e03651ca624abe77646e4edfecbd7795562e39cc8ae45cda3f04"
	I0813 00:18:44.076536  854283 cri.go:76] found id: "e4a8133fd9ebe23ad9db2caf1d929ccb898d91af70632e15166a95a155bfaf8d"
	I0813 00:18:44.076543  854283 cri.go:76] found id: "803caa744a8e8cabf6837af54c4c11725a378fbbb06d219681a16f0281fcfcb3"
	I0813 00:18:44.076548  854283 cri.go:76] found id: "d9a8455b064798364aded95d5385e3bcfb0aaab6c987fe33408efe4b9674ddfd"
	I0813 00:18:44.076553  854283 cri.go:76] found id: "820468f3c8190601605731e6567ffb6f24378dfd0a502efee0ddcba319b2c3b2"
	I0813 00:18:44.076561  854283 cri.go:76] found id: ""
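
The container IDs above come from querying the CRI with a pod-namespace label filter; the runc list that follows cross-checks them against the runtime's own view. A sketch of the crictl side (assuming crictl is on PATH):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// listKubeSystemContainers mirrors the crictl invocation in the log:
// quiet output, all states, filtered by the pod-namespace label.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}
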
	I0813 00:18:44.076593  854283 ssh_runner.go:149] Run: sudo runc list -f json
	I0813 00:18:44.121749  854283 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"008b8de52185ba93c748e98111e4051dbef1a3adb3818923a8ad7a9aeb54ce09","pid":4314,"status":"running","bundle":"/run/containers/storage/overlay-containers/008b8de52185ba93c748e98111e4051dbef1a3adb3818923a8ad7a9aeb54ce09/userdata","rootfs":"/var/lib/containers/storage/overlay/270a2c6b8614836d1cbc2adb054df0f8993ff55c23c6a68b07b09bba3e115843/merged","created":"2021-08-13T00:18:12.851612186Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"158b70b2","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"158b70b2\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.termination
MessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"008b8de52185ba93c748e98111e4051dbef1a3adb3818923a8ad7a9aeb54ce09","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T00:18:12.689085254Z","io.kubernetes.cri-o.Image":"7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-proxy:v1.17.0","io.kubernetes.cri-o.ImageRef":"7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-h8m4f\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"28c49088-079a-4a84-a17a-db12715f9314\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-h8m4f_28c49088-079a-4a84-a17a-db12715f9314/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/c
ontainers/storage/overlay/270a2c6b8614836d1cbc2adb054df0f8993ff55c23c6a68b07b09bba3e115843/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-h8m4f_kube-system_28c49088-079a-4a84-a17a-db12715f9314_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/db3f74f5f0ce4164b5f36fafad269291cbadd840c411461b50d79e5c00356eb3/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"db3f74f5f0ce4164b5f36fafad269291cbadd840c411461b50d79e5c00356eb3","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-h8m4f_kube-system_28c49088-079a-4a84-a17a-db12715f9314_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet
/pods/28c49088-079a-4a84-a17a-db12715f9314/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/28c49088-079a-4a84-a17a-db12715f9314/containers/kube-proxy/f2e7bd88\",\"readonly\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/28c49088-079a-4a84-a17a-db12715f9314/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/28c49088-079a-4a84-a17a-db12715f9314/volumes/kubernetes.io~secret/kube-proxy-token-cwn5h\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-proxy-h8m4f","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"28c49088-079a-4a84-a17a-db12715f9314","kubernetes.io/config.seen":"2021-08-13T00:18:10.598230278Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.proper
ty.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"062f5ddbf3e0e03651ca624abe77646e4edfecbd7795562e39cc8ae45cda3f04","pid":4224,"status":"running","bundle":"/run/containers/storage/overlay-containers/062f5ddbf3e0e03651ca624abe77646e4edfecbd7795562e39cc8ae45cda3f04/userdata","rootfs":"/var/lib/containers/storage/overlay/88fd72f38f75120a3540b6e2df0441b8ad53e8c613046e3d11facd432241d039/merged","created":"2021-08-13T00:18:12.154884252Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"69c2cb84","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o
.Annotations":"{\"io.kubernetes.container.hash\":\"69c2cb84\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"062f5ddbf3e0e03651ca624abe77646e4edfecbd7795562e39cc8ae45cda3f04","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T00:18:12.021989936Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/coredns:1.6.5","io.kubernetes.cri-o.Ima
geRef":"70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-6955765f44-qd5cs\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"ce64704b-5280-4603-9103-0fc8f906d6eb\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-6955765f44-qd5cs_ce64704b-5280-4603-9103-0fc8f906d6eb/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/88fd72f38f75120a3540b6e2df0441b8ad53e8c613046e3d11facd432241d039/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-6955765f44-qd5cs_kube-system_ce64704b-5280-4603-9103-0fc8f906d6eb_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/13548c1ef1814b42b406bed6c740596b923279393507df00a9ad455293f408dd/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"13548c1ef1814b42b406bed6c740596b923279393507df00a9
ad455293f408dd","io.kubernetes.cri-o.SandboxName":"k8s_coredns-6955765f44-qd5cs_kube-system_ce64704b-5280-4603-9103-0fc8f906d6eb_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/ce64704b-5280-4603-9103-0fc8f906d6eb/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/ce64704b-5280-4603-9103-0fc8f906d6eb/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/ce64704b-5280-4603-9103-0fc8f906d6eb/containers/coredns/4ad60932\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/ce64704b-5280-4603-9103-0fc8f906d6eb/volumes/kubernetes.io~secret/coredns-token-s28fj\",\"readonly\":true}]
","io.kubernetes.pod.name":"coredns-6955765f44-qd5cs","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"ce64704b-5280-4603-9103-0fc8f906d6eb","kubernetes.io/config.seen":"2021-08-13T00:18:10.6630509Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"122a471892cdfac5b97bf97cf90111bbf7d79dce56963540560b2405194fd701","pid":3501,"status":"running","bundle":"/run/containers/storage/overlay-containers/122a471892cdfac5b97bf97cf90111bbf7d79dce56963540560b2405194fd701/userdata","rootfs":"/var/lib/containers/storage/overlay/7725377758b5b2bae9b8455c5d5a3ae3fc091ae210da7001f8d46d1c5732ac84/merged","created":"2021-08-13T00:17:46.689754236Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config
.hash\":\"bb577061a17ad23cfbbf52e9419bf32a\",\"kubernetes.io/config.seen\":\"2021-08-13T00:17:45.015893424Z\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-podbb577061a17ad23cfbbf52e9419bf32a.slice","io.kubernetes.cri-o.ContainerID":"122a471892cdfac5b97bf97cf90111bbf7d79dce56963540560b2405194fd701","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-test-preload-20210813001622-820289_kube-system_bb577061a17ad23cfbbf52e9419bf32a_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T00:17:46.406453077Z","io.kubernetes.cri-o.HostName":"test-preload-20210813001622-820289","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/122a471892cdfac5b97bf97cf90111bbf7d79dce56963540560b2405194fd701/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-scheduler-test-preload-20210813001622-820289","io.kubern
etes.cri-o.Labels":"{\"tier\":\"control-plane\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"bb577061a17ad23cfbbf52e9419bf32a\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-scheduler-test-preload-20210813001622-820289\",\"component\":\"kube-scheduler\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-test-preload-20210813001622-820289_bb577061a17ad23cfbbf52e9419bf32a/122a471892cdfac5b97bf97cf90111bbf7d79dce56963540560b2405194fd701.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler-test-preload-20210813001622-820289\",\"uid\":\"bb577061a17ad23cfbbf52e9419bf32a\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/7725377758b5b2bae9b8455c5d5a3ae3fc091ae210da7001f8d46d1c5732ac84/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-test-preload-20210813001622-820289_kube-system_bb577061a17ad23cfbbf52e9419bf32a_0","io.kubernetes.cri-o.Namespace":"kube-system","io.ku
bernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/122a471892cdfac5b97bf97cf90111bbf7d79dce56963540560b2405194fd701/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"122a471892cdfac5b97bf97cf90111bbf7d79dce56963540560b2405194fd701","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/122a471892cdfac5b97bf97cf90111bbf7d79dce56963540560b2405194fd701/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-test-preload-20210813001622-820289","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"bb577061a17ad23cfbbf52e9419bf32a","kubernetes.io/config.hash":"bb577061a17ad23cfbbf52e9419bf32a","kubernetes.io/config.seen":"2021-08-13T00:17:45.015893424Z","kubernetes.io/config.source":"file","org.systemd.property.Colle
ctMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"13548c1ef1814b42b406bed6c740596b923279393507df00a9ad455293f408dd","pid":4164,"status":"running","bundle":"/run/containers/storage/overlay-containers/13548c1ef1814b42b406bed6c740596b923279393507df00a9ad455293f408dd/userdata","rootfs":"/var/lib/containers/storage/overlay/addf12cead48752bdbc49c8c01413b7024743f06011e5d32f26b13ea711a4b89/merged","created":"2021-08-13T00:18:11.296262896Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T00:18:10.6630509Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CNIResult":"{\"cniVersion\":\"0.4.0\",\"interfaces\":[{\"name\":\"bridge\",\"mac\":\"12:ea:da:75:35:8c\"},{\"name\":\"veth531976da\",\"mac\":\"e6:71:1a:06:80:d1\"},{\"name\":\"eth0\",\"mac\":\"ba:ee:25:a7:14:a9\",\"sandbox\":\"/var/run/netns/a8c21007-5548-438e-b209-83dadf94ab36\"}],\"ips\"
:[{\"version\":\"4\",\"interface\":2,\"address\":\"10.244.0.2/16\",\"gateway\":\"10.244.0.1\"}],\"routes\":[{\"dst\":\"0.0.0.0/0\",\"gw\":\"10.244.0.1\"}],\"dns\":{}}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-podce64704b_5280_4603_9103_0fc8f906d6eb.slice","io.kubernetes.cri-o.ContainerID":"13548c1ef1814b42b406bed6c740596b923279393507df00a9ad455293f408dd","io.kubernetes.cri-o.ContainerName":"k8s_POD_coredns-6955765f44-qd5cs_kube-system_ce64704b-5280-4603-9103-0fc8f906d6eb_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T00:18:11.040972691Z","io.kubernetes.cri-o.HostName":"coredns-6955765f44-qd5cs","io.kubernetes.cri-o.HostNetwork":"false","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/13548c1ef1814b42b406bed6c740596b923279393507df00a9ad455293f408dd/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"coredns-6955765f44-qd5cs","io.kubernetes.cri-o.Labels":"{\"io.kubernetes
.container.name\":\"POD\",\"k8s-app\":\"kube-dns\",\"io.kubernetes.pod.uid\":\"ce64704b-5280-4603-9103-0fc8f906d6eb\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"coredns-6955765f44-qd5cs\",\"pod-template-hash\":\"6955765f44\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-6955765f44-qd5cs_ce64704b-5280-4603-9103-0fc8f906d6eb/13548c1ef1814b42b406bed6c740596b923279393507df00a9ad455293f408dd.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns-6955765f44-qd5cs\",\"uid\":\"ce64704b-5280-4603-9103-0fc8f906d6eb\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/addf12cead48752bdbc49c8c01413b7024743f06011e5d32f26b13ea711a4b89/merged","io.kubernetes.cri-o.Name":"k8s_coredns-6955765f44-qd5cs_kube-system_ce64704b-5280-4603-9103-0fc8f906d6eb_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.Privileged
Runtime":"false","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/13548c1ef1814b42b406bed6c740596b923279393507df00a9ad455293f408dd/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"13548c1ef1814b42b406bed6c740596b923279393507df00a9ad455293f408dd","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/13548c1ef1814b42b406bed6c740596b923279393507df00a9ad455293f408dd/userdata/shm","io.kubernetes.pod.name":"coredns-6955765f44-qd5cs","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"ce64704b-5280-4603-9103-0fc8f906d6eb","k8s-app":"kube-dns","kubernetes.io/config.seen":"2021-08-13T00:18:10.6630509Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-hash":"6955765f44"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"385db44b38c8b7ee4afbd6f499b52d0fa97cdc5e3ebc5fbe7fc1f3ead5c3c1d9","pid":3495,"status
":"running","bundle":"/run/containers/storage/overlay-containers/385db44b38c8b7ee4afbd6f499b52d0fa97cdc5e3ebc5fbe7fc1f3ead5c3c1d9/userdata","rootfs":"/var/lib/containers/storage/overlay/633c8ff31c783cd14c7e4c38e1ff5b819de93021786f96e585894c0a620a00ca/merged","created":"2021-08-13T00:17:46.708325809Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T00:17:45.01589114Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"603b914543a305bf066dc8de01ce2232\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pod603b914543a305bf066dc8de01ce2232.slice","io.kubernetes.cri-o.ContainerID":"385db44b38c8b7ee4afbd6f499b52d0fa97cdc5e3ebc5fbe7fc1f3ead5c3c1d9","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-test-preload-20210813001622-820289_kube-system_603b914543a305bf066dc8de01ce2232_0","io.kubernetes.cri-o.ContainerType":"san
dbox","io.kubernetes.cri-o.Created":"2021-08-13T00:17:46.395035066Z","io.kubernetes.cri-o.HostName":"test-preload-20210813001622-820289","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/385db44b38c8b7ee4afbd6f499b52d0fa97cdc5e3ebc5fbe7fc1f3ead5c3c1d9/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-controller-manager-test-preload-20210813001622-820289","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"603b914543a305bf066dc8de01ce2232\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-controller-manager-test-preload-20210813001622-820289\",\"tier\":\"control-plane\",\"component\":\"kube-controller-manager\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-test-preload-20210813001622-820289_603b914543a305bf066dc8de01ce2232/385db44b38c8b7ee4afbd6f499b52d0fa97cdc5e3eb
c5fbe7fc1f3ead5c3c1d9.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-test-preload-20210813001622-820289\",\"uid\":\"603b914543a305bf066dc8de01ce2232\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/633c8ff31c783cd14c7e4c38e1ff5b819de93021786f96e585894c0a620a00ca/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-test-preload-20210813001622-820289_kube-system_603b914543a305bf066dc8de01ce2232_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/385db44b38c8b7ee4afbd6f499b52d0fa97cdc5e3ebc5fbe7fc1f3ead5c3c1d9/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"385db44b38c8b7ee4afbd6f499b52d0fa97cdc5e3ebc5fbe7fc1f3ead5c3c1d9","io.kubernetes.cri-o.Secco
mpProfilePath":"","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/385db44b38c8b7ee4afbd6f499b52d0fa97cdc5e3ebc5fbe7fc1f3ead5c3c1d9/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-test-preload-20210813001622-820289","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"603b914543a305bf066dc8de01ce2232","kubernetes.io/config.hash":"603b914543a305bf066dc8de01ce2232","kubernetes.io/config.seen":"2021-08-13T00:17:45.01589114Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"48fd9f641438f1b90b068df7be0e85b20aa2fd34a5b07eb19eb5234b609893b6","pid":3488,"status":"running","bundle":"/run/containers/storage/overlay-containers/48fd9f641438f1b90b068df7be0e85b20aa2fd34a5b07eb19eb5234b609893b6/userdata","rootfs":"/var/lib/containers/storage/overlay/d536c2aa6dc12954fdacf09203010adfe8af3ad033302545d914d49629d103c4/merged","created":"2021-08-13T00:17
:46.664617653Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T00:17:45.015888601Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"7bfcdeec3c584f675942a6a7a9b0f15d\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-burstable-pod7bfcdeec3c584f675942a6a7a9b0f15d.slice","io.kubernetes.cri-o.ContainerID":"48fd9f641438f1b90b068df7be0e85b20aa2fd34a5b07eb19eb5234b609893b6","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-test-preload-20210813001622-820289_kube-system_7bfcdeec3c584f675942a6a7a9b0f15d_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T00:17:46.410750684Z","io.kubernetes.cri-o.HostName":"test-preload-20210813001622-820289","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/48fd9f641438f1b90b068df7be0e85b20aa2fd34a5
b07eb19eb5234b609893b6/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-apiserver-test-preload-20210813001622-820289","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.name\":\"kube-apiserver-test-preload-20210813001622-820289\",\"io.kubernetes.container.name\":\"POD\",\"tier\":\"control-plane\",\"component\":\"kube-apiserver\",\"io.kubernetes.pod.uid\":\"7bfcdeec3c584f675942a6a7a9b0f15d\",\"io.kubernetes.pod.namespace\":\"kube-system\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-test-preload-20210813001622-820289_7bfcdeec3c584f675942a6a7a9b0f15d/48fd9f641438f1b90b068df7be0e85b20aa2fd34a5b07eb19eb5234b609893b6.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-test-preload-20210813001622-820289\",\"uid\":\"7bfcdeec3c584f675942a6a7a9b0f15d\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d536c2aa6dc12954fdacf09203010adfe8af3ad033302545d914d49629d103c4/merg
ed","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-test-preload-20210813001622-820289_kube-system_7bfcdeec3c584f675942a6a7a9b0f15d_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/48fd9f641438f1b90b068df7be0e85b20aa2fd34a5b07eb19eb5234b609893b6/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"48fd9f641438f1b90b068df7be0e85b20aa2fd34a5b07eb19eb5234b609893b6","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/48fd9f641438f1b90b068df7be0e85b20aa2fd34a5b07eb19eb5234b609893b6/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-test-preload-20210813001622-820289","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"7bfcdeec3c584f675942a6a7a9b0f15d","k
ubernetes.io/config.hash":"7bfcdeec3c584f675942a6a7a9b0f15d","kubernetes.io/config.seen":"2021-08-13T00:17:45.015888601Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"73a27df1d6b5029b8527488f9c8d2a13a7d95ed682bed04b9ad9e5cc294ca29b","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/73a27df1d6b5029b8527488f9c8d2a13a7d95ed682bed04b9ad9e5cc294ca29b/userdata","rootfs":"/var/lib/containers/storage/overlay/e2066d63d66fdc12dbf779bbd5c4a8a536cce2989d3275e7599c5f706f99a7db/merged","created":"2021-08-13T00:18:13.125043975Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"b47cb70c","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"
io.kubernetes.container.hash\":\"b47cb70c\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"73a27df1d6b5029b8527488f9c8d2a13a7d95ed682bed04b9ad9e5cc294ca29b","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T00:18:12.978199833Z","io.kubernetes.cri-o.Image":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"7a0dcf9e-e408-42b1-b85a-5ad22cab3a5d\"}","io
.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_7a0dcf9e-e408-42b1-b85a-5ad22cab3a5d/storage-provisioner/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/e2066d63d66fdc12dbf779bbd5c4a8a536cce2989d3275e7599c5f706f99a7db/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-system_7a0dcf9e-e408-42b1-b85a-5ad22cab3a5d_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/c59c5ebdb47f76ad7bfa9d5622a11e7f59f15e2fddf2955984f1378d3f630c9f/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"c59c5ebdb47f76ad7bfa9d5622a11e7f59f15e2fddf2955984f1378d3f630c9f","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_7a0dcf9e-e408-42b1-b85a-5ad22cab3a5d_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-
o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/7a0dcf9e-e408-42b1-b85a-5ad22cab3a5d/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/7a0dcf9e-e408-42b1-b85a-5ad22cab3a5d/containers/storage-provisioner/79e31ca6\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/7a0dcf9e-e408-42b1-b85a-5ad22cab3a5d/volumes/kubernetes.io~secret/storage-provisioner-token-s7dsb\",\"readonly\":true}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"7a0dcf9e-e408-42b1-b85a-5ad22cab3a5d","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile
\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2021-08-13T00:18:11.984721654Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"803caa744a8e8cabf6837af54c4c11725a378fbbb06d219681a16f0281fcfcb3","pid":3620,"status":"running","bundle":"/run/containers/storage/overlay-containers/803caa744a8e8cabf6837af54c4c11725a378fbbb06d219681a16f0281fcfcb3/userdata","rootfs"
:"/var/lib/containers/storage/overlay/a6b03963374a1402fa6c9a71e57cd9e948e657a439e5e2f46cef06d297d60cee/merged","created":"2021-08-13T00:17:47.535915151Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"589bcd22","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"589bcd22\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"803caa744a8e8cabf6837af54c4c11725a378fbbb06d219681a16f0281fcfcb3","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T00:17:47.285641881Z","io.kubernetes.cri-
o.Image":"5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-controller-manager:v1.17.0","io.kubernetes.cri-o.ImageRef":"5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-test-preload-20210813001622-820289\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"603b914543a305bf066dc8de01ce2232\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-test-preload-20210813001622-820289_603b914543a305bf066dc8de01ce2232/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/a6b03963374a1402fa6c9a71e57cd9e948e657a439e5e2f46cef06d297d60cee/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-tes
t-preload-20210813001622-820289_kube-system_603b914543a305bf066dc8de01ce2232_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/385db44b38c8b7ee4afbd6f499b52d0fa97cdc5e3ebc5fbe7fc1f3ead5c3c1d9/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"385db44b38c8b7ee4afbd6f499b52d0fa97cdc5e3ebc5fbe7fc1f3ead5c3c1d9","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-test-preload-20210813001622-820289_kube-system_603b914543a305bf066dc8de01ce2232_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/603b914543a305bf066dc8de01ce2232/containers/kube-controller-manager/43dcf2c2\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/603b914543a305bf066dc8de01ce2232/etc-hosts\",\"readonly\":false},{\"container_path\":\"/et
c/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false}]","io.kubernetes.pod.name":"kube-controller-manager-test-preload-20210813001622-820289","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"603b914543a305bf066dc8de01ce2232","kubernetes.io/config.hash":"603b914543a305bf066dc8de01ce2232","kubernetes.io/config.seen":"2021-08-13T00:17:45.01589114Z","kubernetes.io/config.source":"file","org.systemd.property.Collec
tMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"820468f3c8190601605731e6567ffb6f24378dfd0a502efee0ddcba319b2c3b2","pid":3558,"status":"running","bundle":"/run/containers/storage/overlay-containers/820468f3c8190601605731e6567ffb6f24378dfd0a502efee0ddcba319b2c3b2/userdata","rootfs":"/var/lib/containers/storage/overlay/6a68d0d975ea887aa64ee030c9e59c2a156d2e7187ed46f8f74a775ee72028f7/merged","created":"2021-08-13T00:17:47.276039825Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"99930feb","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"99930feb\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termin
ation-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"820468f3c8190601605731e6567ffb6f24378dfd0a502efee0ddcba319b2c3b2","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T00:17:47.111825896Z","io.kubernetes.cri-o.Image":"78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-scheduler:v1.17.0","io.kubernetes.cri-o.ImageRef":"78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-test-preload-20210813001622-820289\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"bb577061a17ad23cfbbf52e9419bf32a\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-test-preload-20210813001622-820289_bb577061a17ad23cfbbf52e9419bf32a/kube-s
cheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/6a68d0d975ea887aa64ee030c9e59c2a156d2e7187ed46f8f74a775ee72028f7/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-test-preload-20210813001622-820289_kube-system_bb577061a17ad23cfbbf52e9419bf32a_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/122a471892cdfac5b97bf97cf90111bbf7d79dce56963540560b2405194fd701/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"122a471892cdfac5b97bf97cf90111bbf7d79dce56963540560b2405194fd701","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-test-preload-20210813001622-820289_kube-system_bb577061a17ad23cfbbf52e9419bf32a_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet
/pods/bb577061a17ad23cfbbf52e9419bf32a/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/bb577061a17ad23cfbbf52e9419bf32a/containers/kube-scheduler/83063b36\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-scheduler-test-preload-20210813001622-820289","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"bb577061a17ad23cfbbf52e9419bf32a","kubernetes.io/config.hash":"bb577061a17ad23cfbbf52e9419bf32a","kubernetes.io/config.seen":"2021-08-13T00:17:45.015893424Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c59c5ebdb47f76ad7bfa9d5622a11e7f59f15e2fddf2955984f1378d3f630c9f","pid":4279,"status":"running","bundl
e":"/run/containers/storage/overlay-containers/c59c5ebdb47f76ad7bfa9d5622a11e7f59f15e2fddf2955984f1378d3f630c9f/userdata","rootfs":"/var/lib/containers/storage/overlay/0d6fc6ba8b7b0d2ab637a1c9f1003834f1000ad0fa7dfdb7fdd55c9052f70d9e/merged","created":"2021-08-13T00:18:12.517358702Z","annotations":{"addonmanager.kubernetes.io/mode":"Reconcile","integration-test":"storage-provisioner","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T00:18:11.984721654Z\",\"kubernetes.io/config.source\":\"api\",\"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"v1\\\",\\\"kind\\\":\\\"Pod\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"addonmanager.kubernetes.io/mode\\\":\\\"Reconcile\\\",\\\"integration-test\\\":\\\"storage-provisioner\\\"},\\\"name\\\":\\\"storage-provisioner\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/storage-prov
isioner\\\"],\\\"image\\\":\\\"gcr.io/k8s-minikube/storage-provisioner:v5\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"name\\\":\\\"storage-provisioner\\\",\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"}]}],\\\"hostNetwork\\\":true,\\\"serviceAccountName\\\":\\\"storage-provisioner\\\",\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/tmp\\\",\\\"type\\\":\\\"Directory\\\"},\\\"name\\\":\\\"tmp\\\"}]}}\\n\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-besteffort-pod7a0dcf9e_e408_42b1_b85a_5ad22cab3a5d.slice","io.kubernetes.cri-o.ContainerID":"c59c5ebdb47f76ad7bfa9d5622a11e7f59f15e2fddf2955984f1378d3f630c9f","io.kubernetes.cri-o.ContainerName":"k8s_POD_storage-provisioner_kube-system_7a0dcf9e-e408-42b1-b85a-5ad22cab3a5d_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T00:18:12.345900235Z","io.kubernetes.cri-o.HostName":"test-preload-20210813001622-820289","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.Hostnam
ePath":"/var/run/containers/storage/overlay-containers/c59c5ebdb47f76ad7bfa9d5622a11e7f59f15e2fddf2955984f1378d3f630c9f/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"storage-provisioner","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"7a0dcf9e-e408-42b1-b85a-5ad22cab3a5d\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"integration-test\":\"storage-provisioner\",\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_7a0dcf9e-e408-42b1-b85a-5ad22cab3a5d/c59c5ebdb47f76ad7bfa9d5622a11e7f59f15e2fddf2955984f1378d3f630c9f.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\",\"uid\":\"7a0dcf9e-e408-42b1-b85a-5ad22cab3a5d\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/0d6fc6ba8b7b0d2ab637a1c9f1003834f1000ad0fa7df
db7fdd55c9052f70d9e/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_kube-system_7a0dcf9e-e408-42b1-b85a-5ad22cab3a5d_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/c59c5ebdb47f76ad7bfa9d5622a11e7f59f15e2fddf2955984f1378d3f630c9f/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"c59c5ebdb47f76ad7bfa9d5622a11e7f59f15e2fddf2955984f1378d3f630c9f","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/c59c5ebdb47f76ad7bfa9d5622a11e7f59f15e2fddf2955984f1378d3f630c9f/userdata/shm","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"7a0dcf9e-e408-42b1-b85a-5ad22cab3a5d","kubectl.kubernetes.io/last-ap
plied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2021-08-13T00:18:11.984721654Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d9a8455b064798364aded95d5385e3bcfb0aaab6c987fe33408efe4b9674ddfd","pid":3597,"status":"running","bundle":"/run/containers/storage/overl
ay-containers/d9a8455b064798364aded95d5385e3bcfb0aaab6c987fe33408efe4b9674ddfd/userdata","rootfs":"/var/lib/containers/storage/overlay/d65b8e5eec651e543a4fb09bbd1dbf3f44b276ab0401a02cf2e66b6325aa6021/merged","created":"2021-08-13T00:17:47.45559725Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"a3c4bc57","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"a3c4bc57\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"d9a8455b064798364aded95d5385e3bcfb0aaab6c987fe33408efe4b9674ddfd","io.kubernetes.cri-o.ContainerType":"contai
ner","io.kubernetes.cri-o.Created":"2021-08-13T00:17:47.210413209Z","io.kubernetes.cri-o.Image":"0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-apiserver:v1.17.0","io.kubernetes.cri-o.ImageRef":"0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-test-preload-20210813001622-820289\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"7bfcdeec3c584f675942a6a7a9b0f15d\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-test-preload-20210813001622-820289_7bfcdeec3c584f675942a6a7a9b0f15d/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d65b8e5eec651e543a4fb09bbd1dbf3f44b276ab0401a02cf2e66b6325aa6021/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kub
e-apiserver-test-preload-20210813001622-820289_kube-system_7bfcdeec3c584f675942a6a7a9b0f15d_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/48fd9f641438f1b90b068df7be0e85b20aa2fd34a5b07eb19eb5234b609893b6/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"48fd9f641438f1b90b068df7be0e85b20aa2fd34a5b07eb19eb5234b609893b6","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-test-preload-20210813001622-820289_kube-system_7bfcdeec3c584f675942a6a7a9b0f15d_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/7bfcdeec3c584f675942a6a7a9b0f15d/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/7bfcdeec3c584f675942a6a7a9b0f15d/containers/kube-apiserver/d5b27d6b\",\"readonly\":false},{\"container_path\":\"/etc/s
sl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-apiserver-test-preload-20210813001622-820289","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"7bfcdeec3c584f675942a6a7a9b0f15d","kubernetes.io/config.hash":"7bfcdeec3c584f675942a6a7a9b0f15d","kubernetes.io/config.seen":"2021-08-13T00:17:45.015888601Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"db3f74f5f0ce4164b5f36fafad269291cbadd840c411461b50d79e5c00356eb3","pid":4233,"status":"running","bundle":"/run/containers/storage/overlay-containers/db3f74f5f0ce4164b5f36fafad269291c
badd840c411461b50d79e5c00356eb3/userdata","rootfs":"/var/lib/containers/storage/overlay/efb1a76f6ea0a0a5d8e670cb0e949b78ccb3ad0bb0ed48f69b896cda7cb58ab8/merged","created":"2021-08-13T00:18:12.194962655Z","annotations":{"controller-revision-hash":"68bd87b66","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"api\",\"kubernetes.io/config.seen\":\"2021-08-13T00:18:10.598230278Z\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-besteffort-pod28c49088_079a_4a84_a17a_db12715f9314.slice","io.kubernetes.cri-o.ContainerID":"db3f74f5f0ce4164b5f36fafad269291cbadd840c411461b50d79e5c00356eb3","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-proxy-h8m4f_kube-system_28c49088-079a-4a84-a17a-db12715f9314_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T00:18:11.93021271Z","io.kubernetes.cri-o.HostName":"test-preload-20210813001622-820289","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o
.HostnamePath":"/var/run/containers/storage/overlay-containers/db3f74f5f0ce4164b5f36fafad269291cbadd840c411461b50d79e5c00356eb3/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-proxy-h8m4f","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-proxy-h8m4f\",\"pod-template-generation\":\"1\",\"io.kubernetes.container.name\":\"POD\",\"k8s-app\":\"kube-proxy\",\"controller-revision-hash\":\"68bd87b66\",\"io.kubernetes.pod.uid\":\"28c49088-079a-4a84-a17a-db12715f9314\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-h8m4f_28c49088-079a-4a84-a17a-db12715f9314/db3f74f5f0ce4164b5f36fafad269291cbadd840c411461b50d79e5c00356eb3.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy-h8m4f\",\"uid\":\"28c49088-079a-4a84-a17a-db12715f9314\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/efb1a76f6ea0a0a5d8e670cb0e949b78ccb3ad0b
b0ed48f69b896cda7cb58ab8/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy-h8m4f_kube-system_28c49088-079a-4a84-a17a-db12715f9314_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/db3f74f5f0ce4164b5f36fafad269291cbadd840c411461b50d79e5c00356eb3/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"db3f74f5f0ce4164b5f36fafad269291cbadd840c411461b50d79e5c00356eb3","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/db3f74f5f0ce4164b5f36fafad269291cbadd840c411461b50d79e5c00356eb3/userdata/shm","io.kubernetes.pod.name":"kube-proxy-h8m4f","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"28c49088-079a-4a84-a17a-db12715f9314","k8s-app":"kube-proxy","kuberne
tes.io/config.seen":"2021-08-13T00:18:10.598230278Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e28746aaa68fd1b227bf2c780578b35a1efa0c39623f1c492afac324e0b3bbc9","pid":3461,"status":"running","bundle":"/run/containers/storage/overlay-containers/e28746aaa68fd1b227bf2c780578b35a1efa0c39623f1c492afac324e0b3bbc9/userdata","rootfs":"/var/lib/containers/storage/overlay/23e404de446d30cb51df836a17c3743cc7819e483ff8370aad4f748ea8c20422/merged","created":"2021-08-13T00:17:46.550384275Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"dcd7047348b95304f11961ab6d0a99de\",\"kubernetes.io/config.seen\":\"2021-08-13T00:17:45.015882989Z\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"kubepods-besteffort-poddcd7047348b95304f11961ab6d0a99de.slice",
"io.kubernetes.cri-o.ContainerID":"e28746aaa68fd1b227bf2c780578b35a1efa0c39623f1c492afac324e0b3bbc9","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-test-preload-20210813001622-820289_kube-system_dcd7047348b95304f11961ab6d0a99de_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T00:17:46.351680581Z","io.kubernetes.cri-o.HostName":"test-preload-20210813001622-820289","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/e28746aaa68fd1b227bf2c780578b35a1efa0c39623f1c492afac324e0b3bbc9/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"etcd-test-preload-20210813001622-820289","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"etcd-test-preload-20210813001622-820289\",\"io.kubernetes.container.name\":\"POD\",\"component\":\"etcd\",\"tier\":\"control-plane\",\"io.kubernetes.pod.uid\":\"dcd7047348b9
5304f11961ab6d0a99de\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-test-preload-20210813001622-820289_dcd7047348b95304f11961ab6d0a99de/e28746aaa68fd1b227bf2c780578b35a1efa0c39623f1c492afac324e0b3bbc9.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-test-preload-20210813001622-820289\",\"uid\":\"dcd7047348b95304f11961ab6d0a99de\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/23e404de446d30cb51df836a17c3743cc7819e483ff8370aad4f748ea8c20422/merged","io.kubernetes.cri-o.Name":"k8s_etcd-test-preload-20210813001622-820289_kube-system_dcd7047348b95304f11961ab6d0a99de_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/e28746aaa68fd1b227bf2c780578b35a1efa0c39623f1c492afac324e0b3bbc9/userdata/resolv.conf","io.k
ubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"e28746aaa68fd1b227bf2c780578b35a1efa0c39623f1c492afac324e0b3bbc9","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/e28746aaa68fd1b227bf2c780578b35a1efa0c39623f1c492afac324e0b3bbc9/userdata/shm","io.kubernetes.pod.name":"etcd-test-preload-20210813001622-820289","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"dcd7047348b95304f11961ab6d0a99de","kubernetes.io/config.hash":"dcd7047348b95304f11961ab6d0a99de","kubernetes.io/config.seen":"2021-08-13T00:17:45.015882989Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e4a8133fd9ebe23ad9db2caf1d929ccb898d91af70632e15166a95a155bfaf8d","pid":3702,"status":"running","bundle":"/run/containers/storage/overlay-containers/e4a8133fd9ebe23ad9db2caf1d929ccb898d91af70632e15166a95a155bfaf8d/userdata","roo
tfs":"/var/lib/containers/storage/overlay/9ea52cd7b02ebb13a79b952cacac072cf94facc8aa92deca4dea7adcb5ef9d17/merged","created":"2021-08-13T00:17:48.878634968Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"1d5dd31d","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"1d5dd31d\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"e4a8133fd9ebe23ad9db2caf1d929ccb898d91af70632e15166a95a155bfaf8d","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T00:17:47.609220276Z","io.kubernetes.cri-o.Image":"303ce
5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/etcd:3.4.3-0","io.kubernetes.cri-o.ImageRef":"303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-test-preload-20210813001622-820289\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"dcd7047348b95304f11961ab6d0a99de\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-test-preload-20210813001622-820289_dcd7047348b95304f11961ab6d0a99de/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9ea52cd7b02ebb13a79b952cacac072cf94facc8aa92deca4dea7adcb5ef9d17/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-test-preload-20210813001622-820289_kube-system_dcd7047348b95304f11961ab6d0a99de_0","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/e28746
aaa68fd1b227bf2c780578b35a1efa0c39623f1c492afac324e0b3bbc9/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"e28746aaa68fd1b227bf2c780578b35a1efa0c39623f1c492afac324e0b3bbc9","io.kubernetes.cri-o.SandboxName":"k8s_etcd-test-preload-20210813001622-820289_kube-system_dcd7047348b95304f11961ab6d0a99de_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/dcd7047348b95304f11961ab6d0a99de/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/dcd7047348b95304f11961ab6d0a99de/containers/etcd/448ff521\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false}]",
"io.kubernetes.pod.name":"etcd-test-preload-20210813001622-820289","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"dcd7047348b95304f11961ab6d0a99de","kubernetes.io/config.hash":"dcd7047348b95304f11961ab6d0a99de","kubernetes.io/config.seen":"2021-08-13T00:17:45.015882989Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f2d595a01e7c1af4e0cd4816f9bbf733a0f2002cdd046c625ed5a27ca318b2e9","pid":4483,"status":"running","bundle":"/run/containers/storage/overlay-containers/f2d595a01e7c1af4e0cd4816f9bbf733a0f2002cdd046c625ed5a27ca318b2e9/userdata","rootfs":"/var/lib/containers/storage/overlay/ac0d4a018899f159e20e3395fcae6e9c885a7fb7a2aaf8bd859388f316612248/merged","created":"2021-08-13T00:18:13.740727778Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"b47cb70c","io.kub
ernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"b47cb70c\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"f2d595a01e7c1af4e0cd4816f9bbf733a0f2002cdd046c625ed5a27ca318b2e9","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T00:18:13.620945879Z","io.kubernetes.cri-o.Image":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","i
o.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"7a0dcf9e-e408-42b1-b85a-5ad22cab3a5d\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_7a0dcf9e-e408-42b1-b85a-5ad22cab3a5d/storage-provisioner/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/ac0d4a018899f159e20e3395fcae6e9c885a7fb7a2aaf8bd859388f316612248/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-system_7a0dcf9e-e408-42b1-b85a-5ad22cab3a5d_1","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/c59c5ebdb47f76ad7bfa9d5622a11e7f59f15e2fddf2955984f1378d3f630c9f/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"c59c5ebdb47f76ad7bfa9d5622a11e7f59f15e2fddf2955984f1378d3f630c9f","io.kubernete
s.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_7a0dcf9e-e408-42b1-b85a-5ad22cab3a5d_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/7a0dcf9e-e408-42b1-b85a-5ad22cab3a5d/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/7a0dcf9e-e408-42b1-b85a-5ad22cab3a5d/containers/storage-provisioner/9500048d\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/7a0dcf9e-e408-42b1-b85a-5ad22cab3a5d/volumes/kubernetes.io~secret/storage-provisioner-token-s7dsb\",\"readonly\":true}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.termi
nationGracePeriod":"30","io.kubernetes.pod.uid":"7a0dcf9e-e408-42b1-b85a-5ad22cab3a5d","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2021-08-13T00:18:11.984721654Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"
root"}]
	I0813 00:18:44.122487  854283 cri.go:113] list returned 15 containers
	I0813 00:18:44.122502  854283 cri.go:116] container: {ID:008b8de52185ba93c748e98111e4051dbef1a3adb3818923a8ad7a9aeb54ce09 Status:running}
	I0813 00:18:44.122514  854283 cri.go:122] skipping {008b8de52185ba93c748e98111e4051dbef1a3adb3818923a8ad7a9aeb54ce09 running}: state = "running", want "paused"
	I0813 00:18:44.122531  854283 cri.go:116] container: {ID:062f5ddbf3e0e03651ca624abe77646e4edfecbd7795562e39cc8ae45cda3f04 Status:running}
	I0813 00:18:44.122538  854283 cri.go:122] skipping {062f5ddbf3e0e03651ca624abe77646e4edfecbd7795562e39cc8ae45cda3f04 running}: state = "running", want "paused"
	I0813 00:18:44.122545  854283 cri.go:116] container: {ID:122a471892cdfac5b97bf97cf90111bbf7d79dce56963540560b2405194fd701 Status:running}
	I0813 00:18:44.122550  854283 cri.go:118] skipping 122a471892cdfac5b97bf97cf90111bbf7d79dce56963540560b2405194fd701 - not in ps
	I0813 00:18:44.122555  854283 cri.go:116] container: {ID:13548c1ef1814b42b406bed6c740596b923279393507df00a9ad455293f408dd Status:running}
	I0813 00:18:44.122560  854283 cri.go:118] skipping 13548c1ef1814b42b406bed6c740596b923279393507df00a9ad455293f408dd - not in ps
	I0813 00:18:44.122566  854283 cri.go:116] container: {ID:385db44b38c8b7ee4afbd6f499b52d0fa97cdc5e3ebc5fbe7fc1f3ead5c3c1d9 Status:running}
	I0813 00:18:44.122571  854283 cri.go:118] skipping 385db44b38c8b7ee4afbd6f499b52d0fa97cdc5e3ebc5fbe7fc1f3ead5c3c1d9 - not in ps
	I0813 00:18:44.122575  854283 cri.go:116] container: {ID:48fd9f641438f1b90b068df7be0e85b20aa2fd34a5b07eb19eb5234b609893b6 Status:running}
	I0813 00:18:44.122580  854283 cri.go:118] skipping 48fd9f641438f1b90b068df7be0e85b20aa2fd34a5b07eb19eb5234b609893b6 - not in ps
	I0813 00:18:44.122584  854283 cri.go:116] container: {ID:73a27df1d6b5029b8527488f9c8d2a13a7d95ed682bed04b9ad9e5cc294ca29b Status:stopped}
	I0813 00:18:44.122589  854283 cri.go:122] skipping {73a27df1d6b5029b8527488f9c8d2a13a7d95ed682bed04b9ad9e5cc294ca29b stopped}: state = "stopped", want "paused"
	I0813 00:18:44.122595  854283 cri.go:116] container: {ID:803caa744a8e8cabf6837af54c4c11725a378fbbb06d219681a16f0281fcfcb3 Status:running}
	I0813 00:18:44.122601  854283 cri.go:122] skipping {803caa744a8e8cabf6837af54c4c11725a378fbbb06d219681a16f0281fcfcb3 running}: state = "running", want "paused"
	I0813 00:18:44.122611  854283 cri.go:116] container: {ID:820468f3c8190601605731e6567ffb6f24378dfd0a502efee0ddcba319b2c3b2 Status:running}
	I0813 00:18:44.122617  854283 cri.go:122] skipping {820468f3c8190601605731e6567ffb6f24378dfd0a502efee0ddcba319b2c3b2 running}: state = "running", want "paused"
	I0813 00:18:44.122627  854283 cri.go:116] container: {ID:c59c5ebdb47f76ad7bfa9d5622a11e7f59f15e2fddf2955984f1378d3f630c9f Status:running}
	I0813 00:18:44.122634  854283 cri.go:118] skipping c59c5ebdb47f76ad7bfa9d5622a11e7f59f15e2fddf2955984f1378d3f630c9f - not in ps
	I0813 00:18:44.122640  854283 cri.go:116] container: {ID:d9a8455b064798364aded95d5385e3bcfb0aaab6c987fe33408efe4b9674ddfd Status:running}
	I0813 00:18:44.122645  854283 cri.go:122] skipping {d9a8455b064798364aded95d5385e3bcfb0aaab6c987fe33408efe4b9674ddfd running}: state = "running", want "paused"
	I0813 00:18:44.122651  854283 cri.go:116] container: {ID:db3f74f5f0ce4164b5f36fafad269291cbadd840c411461b50d79e5c00356eb3 Status:running}
	I0813 00:18:44.122656  854283 cri.go:118] skipping db3f74f5f0ce4164b5f36fafad269291cbadd840c411461b50d79e5c00356eb3 - not in ps
	I0813 00:18:44.122667  854283 cri.go:116] container: {ID:e28746aaa68fd1b227bf2c780578b35a1efa0c39623f1c492afac324e0b3bbc9 Status:running}
	I0813 00:18:44.122674  854283 cri.go:118] skipping e28746aaa68fd1b227bf2c780578b35a1efa0c39623f1c492afac324e0b3bbc9 - not in ps
	I0813 00:18:44.122678  854283 cri.go:116] container: {ID:e4a8133fd9ebe23ad9db2caf1d929ccb898d91af70632e15166a95a155bfaf8d Status:running}
	I0813 00:18:44.122683  854283 cri.go:122] skipping {e4a8133fd9ebe23ad9db2caf1d929ccb898d91af70632e15166a95a155bfaf8d running}: state = "running", want "paused"
	I0813 00:18:44.122691  854283 cri.go:116] container: {ID:f2d595a01e7c1af4e0cd4816f9bbf733a0f2002cdd046c625ed5a27ca318b2e9 Status:running}
	I0813 00:18:44.122700  854283 cri.go:122] skipping {f2d595a01e7c1af4e0cd4816f9bbf733a0f2002cdd046c625ed5a27ca318b2e9 running}: state = "running", want "paused"
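The JSON dump above is the raw CRI container listing that minikube parses before deciding which containers could be unpaused; the cri.go lines that follow show every entry being skipped, either because its state is "running" or "stopped" rather than the wanted "paused", or because the ID belongs to a sandbox that did not appear in the crictl ps output. A minimal Go sketch of that filtering step, under assumed names (container, filterPaused, and inPs are illustrative, not minikube's actual identifiers):

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // container mirrors the two fields of each `runc list --format json`
    // entry that the filter actually needs ("id" and "status" above).
    type container struct {
    	ID     string `json:"id"`
    	Status string `json:"status"`
    }

    // filterPaused keeps only containers whose IDs appeared in the
    // `crictl ps` output (inPs) and that are already in the wanted
    // "paused" state, logging a skip reason for everything else.
    func filterPaused(listJSON []byte, inPs map[string]bool) []container {
    	var all []container
    	if err := json.Unmarshal(listJSON, &all); err != nil {
    		fmt.Println("parse error:", err)
    		return nil
    	}
    	var kept []container
    	for _, c := range all {
    		if !inPs[c.ID] {
    			fmt.Printf("skipping %s - not in ps\n", c.ID)
    			continue
    		}
    		if c.Status != "paused" {
    			fmt.Printf("skipping {%s %s}: state = %q, want %q\n",
    				c.ID, c.Status, c.Status, "paused")
    			continue
    		}
    		kept = append(kept, c)
    	}
    	return kept
    }

    func main() {
    	list := []byte(`[{"id":"aaa","status":"running"},{"id":"bbb","status":"paused"}]`)
    	fmt.Println("kept:", filterPaused(list, map[string]bool{"aaa": true, "bbb": true}))
    }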
	I0813 00:18:44.122744  854283 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 00:18:44.130125  854283 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0813 00:18:44.130142  854283 kubeadm.go:600] restartCluster start
	I0813 00:18:44.130176  854283 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0813 00:18:44.137595  854283 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0813 00:18:44.138486  854283 kubeconfig.go:93] found "test-preload-20210813001622-820289" server: "https://192.168.39.8:8443"
	I0813 00:18:44.138973  854283 kapi.go:59] client config for test-preload-20210813001622-820289: &rest.Config{Host:"https://192.168.39.8:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/test-preload-20210813001622-820289/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/test-preload-202108
13001622-820289/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 00:18:44.140828  854283 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0813 00:18:44.148038  854283 kubeadm.go:568] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -40,7 +40,7 @@
	     dataDir: /var/lib/minikube/etcd
	     extraArgs:
	       proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.17.0
	+kubernetesVersion: v1.17.3
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
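restartCluster decides whether kubeadm needs to rerun by diffing the kubeadm.yaml on disk against the freshly rendered kubeadm.yaml.new; here the only drift is the version bump from v1.17.0 to v1.17.3 that TestPreload deliberately introduces. A local sketch of the check, with needsReconfigure as an illustrative name (the real call runs diff inside the guest through ssh_runner):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // needsReconfigure runs `diff -u old new` the way the log above does.
    // diff exits 0 when the files match and 1 when they differ, so a
    // non-nil error together with captured output is read as "configs
    // differ" and the unified diff is surfaced for the log.
    func needsReconfigure(oldPath, newPath string) (bool, string) {
    	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
    	if err != nil && len(out) > 0 {
    		return true, string(out)
    	}
    	return false, ""
    }

    func main() {
    	changed, diff := needsReconfigure(
    		"/var/tmp/minikube/kubeadm.yaml",
    		"/var/tmp/minikube/kubeadm.yaml.new",
    	)
    	if changed {
    		fmt.Printf("needs reconfigure: configs differ:\n%s", diff)
    	}
    }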
	I0813 00:18:44.148060  854283 kubeadm.go:1032] stopping kube-system containers ...
	I0813 00:18:44.148074  854283 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0813 00:18:44.148116  854283 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 00:18:44.188072  854283 cri.go:76] found id: "f2d595a01e7c1af4e0cd4816f9bbf733a0f2002cdd046c625ed5a27ca318b2e9"
	I0813 00:18:44.188091  854283 cri.go:76] found id: "73a27df1d6b5029b8527488f9c8d2a13a7d95ed682bed04b9ad9e5cc294ca29b"
	I0813 00:18:44.188095  854283 cri.go:76] found id: "008b8de52185ba93c748e98111e4051dbef1a3adb3818923a8ad7a9aeb54ce09"
	I0813 00:18:44.188099  854283 cri.go:76] found id: "062f5ddbf3e0e03651ca624abe77646e4edfecbd7795562e39cc8ae45cda3f04"
	I0813 00:18:44.188108  854283 cri.go:76] found id: "e4a8133fd9ebe23ad9db2caf1d929ccb898d91af70632e15166a95a155bfaf8d"
	I0813 00:18:44.188113  854283 cri.go:76] found id: "803caa744a8e8cabf6837af54c4c11725a378fbbb06d219681a16f0281fcfcb3"
	I0813 00:18:44.188116  854283 cri.go:76] found id: "d9a8455b064798364aded95d5385e3bcfb0aaab6c987fe33408efe4b9674ddfd"
	I0813 00:18:44.188120  854283 cri.go:76] found id: "820468f3c8190601605731e6567ffb6f24378dfd0a502efee0ddcba319b2c3b2"
	I0813 00:18:44.188123  854283 cri.go:76] found id: ""
	I0813 00:18:44.188128  854283 cri.go:221] Stopping containers: [f2d595a01e7c1af4e0cd4816f9bbf733a0f2002cdd046c625ed5a27ca318b2e9 73a27df1d6b5029b8527488f9c8d2a13a7d95ed682bed04b9ad9e5cc294ca29b 008b8de52185ba93c748e98111e4051dbef1a3adb3818923a8ad7a9aeb54ce09 062f5ddbf3e0e03651ca624abe77646e4edfecbd7795562e39cc8ae45cda3f04 e4a8133fd9ebe23ad9db2caf1d929ccb898d91af70632e15166a95a155bfaf8d 803caa744a8e8cabf6837af54c4c11725a378fbbb06d219681a16f0281fcfcb3 d9a8455b064798364aded95d5385e3bcfb0aaab6c987fe33408efe4b9674ddfd 820468f3c8190601605731e6567ffb6f24378dfd0a502efee0ddcba319b2c3b2]
	I0813 00:18:44.188162  854283 ssh_runner.go:149] Run: which crictl
	I0813 00:18:44.192241  854283 ssh_runner.go:149] Run: sudo /bin/crictl stop f2d595a01e7c1af4e0cd4816f9bbf733a0f2002cdd046c625ed5a27ca318b2e9 73a27df1d6b5029b8527488f9c8d2a13a7d95ed682bed04b9ad9e5cc294ca29b 008b8de52185ba93c748e98111e4051dbef1a3adb3818923a8ad7a9aeb54ce09 062f5ddbf3e0e03651ca624abe77646e4edfecbd7795562e39cc8ae45cda3f04 e4a8133fd9ebe23ad9db2caf1d929ccb898d91af70632e15166a95a155bfaf8d 803caa744a8e8cabf6837af54c4c11725a378fbbb06d219681a16f0281fcfcb3 d9a8455b064798364aded95d5385e3bcfb0aaab6c987fe33408efe4b9674ddfd 820468f3c8190601605731e6567ffb6f24378dfd0a502efee0ddcba319b2c3b2
	I0813 00:18:46.863986  854283 ssh_runner.go:189] Completed: sudo /bin/crictl stop f2d595a01e7c1af4e0cd4816f9bbf733a0f2002cdd046c625ed5a27ca318b2e9 73a27df1d6b5029b8527488f9c8d2a13a7d95ed682bed04b9ad9e5cc294ca29b 008b8de52185ba93c748e98111e4051dbef1a3adb3818923a8ad7a9aeb54ce09 062f5ddbf3e0e03651ca624abe77646e4edfecbd7795562e39cc8ae45cda3f04 e4a8133fd9ebe23ad9db2caf1d929ccb898d91af70632e15166a95a155bfaf8d 803caa744a8e8cabf6837af54c4c11725a378fbbb06d219681a16f0281fcfcb3 d9a8455b064798364aded95d5385e3bcfb0aaab6c987fe33408efe4b9674ddfd 820468f3c8190601605731e6567ffb6f24378dfd0a502efee0ddcba319b2c3b2: (2.671692577s)
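Before reconfiguring, every kube-system container is stopped: one crictl ps -a --quiet query scoped by the pod-namespace label collects the IDs, and a single crictl stop takes them all down (2.67s here). A sketch of the same two commands run locally, with stopKubeSystem as an illustrative wrapper (minikube issues them over SSH with sudo):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // stopKubeSystem mirrors the two commands in the log: list every CRI
    // container carrying the kube-system namespace label, then stop them
    // all in one crictl invocation.
    func stopKubeSystem() error {
    	out, err := exec.Command("crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	if err != nil {
    		return err
    	}
    	ids := strings.Fields(string(out))
    	if len(ids) == 0 {
    		return nil
    	}
    	fmt.Printf("Stopping containers: %v\n", ids)
    	return exec.Command("crictl", append([]string{"stop"}, ids...)...).Run()
    }

    func main() {
    	if err := stopKubeSystem(); err != nil {
    		fmt.Println("stop failed:", err)
    	}
    }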
	I0813 00:18:46.864062  854283 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0813 00:18:46.877556  854283 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 00:18:46.884433  854283 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5615 Aug 13 00:17 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5651 Aug 13 00:17 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2075 Aug 13 00:17 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5599 Aug 13 00:17 /etc/kubernetes/scheduler.conf
	
	I0813 00:18:46.884493  854283 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0813 00:18:46.890887  854283 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0813 00:18:46.897333  854283 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0813 00:18:46.903568  854283 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0813 00:18:46.909925  854283 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 00:18:46.916537  854283 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0813 00:18:46.916555  854283 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.3:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 00:18:46.983319  854283 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 00:18:47.842950  854283 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.3:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0813 00:18:48.126353  854283 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.3:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 00:18:48.229755  854283 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.3:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
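Rather than a full `kubeadm init`, the restart path replays individual init phases in a fixed order (certs all, kubeconfig all, kubelet-start, control-plane all, etcd local) against the updated config, reusing the existing cluster state. A sketch of that sequence, with the binary path and config location taken verbatim from the log and error handling reduced to a print:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Replay the kubeadm init phases shown in the log, in order,
    	// against the freshly copied /var/tmp/minikube/kubeadm.yaml.
    	phases := []string{
    		"certs all",
    		"kubeconfig all",
    		"kubelet-start",
    		"control-plane all",
    		"etcd local",
    	}
    	for _, phase := range phases {
    		cmd := fmt.Sprintf(
    			"sudo env PATH=/var/lib/minikube/binaries/v1.17.3:$PATH "+
    				"kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml",
    			phase,
    		)
    		if err := exec.Command("/bin/bash", "-c", cmd).Run(); err != nil {
    			fmt.Printf("phase %q failed: %v\n", phase, err)
    			return
    		}
    	}
    }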
	I0813 00:18:48.394194  854283 api_server.go:50] waiting for apiserver process to appear ...
	I0813 00:18:48.394284  854283 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 00:18:48.909373  854283 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 00:18:49.409241  854283 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 00:18:49.909956  854283 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 00:18:50.409348  854283 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 00:18:50.420940  854283 api_server.go:70] duration metric: took 2.026750325s to wait for apiserver process to appear ...
	I0813 00:18:50.420962  854283 api_server.go:86] waiting for apiserver healthz status ...
	I0813 00:18:50.420971  854283 api_server.go:239] Checking apiserver healthz at https://192.168.39.8:8443/healthz ...
	I0813 00:18:54.601158  854283 api_server.go:265] https://192.168.39.8:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0813 00:18:54.601188  854283 api_server.go:101] status: https://192.168.39.8:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0813 00:18:55.101471  854283 api_server.go:239] Checking apiserver healthz at https://192.168.39.8:8443/healthz ...
	I0813 00:18:55.106542  854283 api_server.go:265] https://192.168.39.8:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 00:18:55.106563  854283 api_server.go:101] status: https://192.168.39.8:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 00:18:55.602047  854283 api_server.go:239] Checking apiserver healthz at https://192.168.39.8:8443/healthz ...
	I0813 00:18:55.609174  854283 api_server.go:265] https://192.168.39.8:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 00:18:55.609201  854283 api_server.go:101] status: https://192.168.39.8:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 00:18:56.102310  854283 api_server.go:239] Checking apiserver healthz at https://192.168.39.8:8443/healthz ...
	I0813 00:18:56.108140  854283 api_server.go:265] https://192.168.39.8:8443/healthz returned 200:
	ok
	I0813 00:18:56.115590  854283 api_server.go:139] control plane version: v1.17.3
	I0813 00:18:56.115611  854283 api_server.go:129] duration metric: took 5.694643626s to wait for apiserver health ...
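Once the control plane is relaunched, minikube polls the apiserver's /healthz endpoint roughly every 500ms until it answers 200. The progression above is typical of a restart: first 403 (anonymous requests are rejected until RBAC bootstrap finishes), then 500 while the rbac/bootstrap-roles and scheduling poststart hooks complete, then "ok". A self-contained sketch of the loop; InsecureSkipVerify stands in for the profile's client certificate, which the real client config supplies:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	// Poll /healthz every 500ms, as the timestamps above show, until
    	// the apiserver answers 200 "ok". Non-200 bodies carry the
    	// per-check breakdown ([+]/[-] lines) that the log reproduces.
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	for {
    		resp, err := client.Get("https://192.168.39.8:8443/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Printf("healthz returned 200: %s\n", body)
    				return
    			}
    			fmt.Printf("status: healthz returned error %d:\n%s\n",
    				resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }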
	I0813 00:18:56.115624  854283 cni.go:93] Creating CNI manager for ""
	I0813 00:18:56.115632  854283 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0813 00:18:56.117749  854283 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 00:18:56.117808  854283 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0813 00:18:56.125989  854283 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
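For the kvm2 driver with the CRI-O runtime, minikube recommends its built-in bridge CNI and copies a conflist into /etc/cni/net.d. The 457-byte payload itself is not shown in the log; the snippet below writes an illustrative bridge-plus-portmap conflist of the usual shape, with the pod subnet taken from the kubeadm diff earlier (10.244.0.0/16). The field values are assumptions, not the literal file minikube generated:

    package main

    import "os"

    func main() {
    	// Illustrative bridge CNI conflist: a bridge plugin with
    	// host-local IPAM on the cluster's pod subnet, chained with
    	// portmap. Assumed content, not minikube's exact 457-byte file.
    	conflist := `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }
    `
    	// Writing under /etc/cni/net.d requires root.
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
    		panic(err)
    	}
    }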
	I0813 00:18:56.139606  854283 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 00:18:56.150238  854283 system_pods.go:59] 7 kube-system pods found
	I0813 00:18:56.150262  854283 system_pods.go:61] "coredns-6955765f44-qd5cs" [ce64704b-5280-4603-9103-0fc8f906d6eb] Running
	I0813 00:18:56.150267  854283 system_pods.go:61] "etcd-test-preload-20210813001622-820289" [f6e89987-f8fa-410a-b70d-6b78603b469b] Running
	I0813 00:18:56.150271  854283 system_pods.go:61] "kube-apiserver-test-preload-20210813001622-820289" [fd4a4611-3c4e-4294-a625-8bba8706dde3] Pending
	I0813 00:18:56.150275  854283 system_pods.go:61] "kube-controller-manager-test-preload-20210813001622-820289" [a97d0cb8-bf69-444c-9804-e784aceb1803] Pending
	I0813 00:18:56.150279  854283 system_pods.go:61] "kube-proxy-h8m4f" [28c49088-079a-4a84-a17a-db12715f9314] Running
	I0813 00:18:56.150282  854283 system_pods.go:61] "kube-scheduler-test-preload-20210813001622-820289" [09483d5e-8196-46b5-831a-52b4198203d3] Pending
	I0813 00:18:56.150286  854283 system_pods.go:61] "storage-provisioner" [7a0dcf9e-e408-42b1-b85a-5ad22cab3a5d] Running
	I0813 00:18:56.150291  854283 system_pods.go:74] duration metric: took 10.672906ms to wait for pod list to return data ...
	I0813 00:18:56.150317  854283 node_conditions.go:102] verifying NodePressure condition ...
	I0813 00:18:56.156174  854283 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 00:18:56.156207  854283 node_conditions.go:123] node cpu capacity is 2
	I0813 00:18:56.156222  854283 node_conditions.go:105] duration metric: took 5.900227ms to run NodePressure ...
	I0813 00:18:56.156241  854283 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 00:18:56.445220  854283 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0813 00:18:56.451506  854283 kubeadm.go:746] kubelet initialised
	I0813 00:18:56.451524  854283 kubeadm.go:747] duration metric: took 6.270473ms waiting for restarted kubelet to initialise ...
	I0813 00:18:56.451533  854283 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 00:18:56.455812  854283 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6955765f44-qd5cs" in "kube-system" namespace to be "Ready" ...
	I0813 00:18:56.465133  854283 pod_ready.go:92] pod "coredns-6955765f44-qd5cs" in "kube-system" namespace has status "Ready":"True"
	I0813 00:18:56.465151  854283 pod_ready.go:81] duration metric: took 9.318704ms waiting for pod "coredns-6955765f44-qd5cs" in "kube-system" namespace to be "Ready" ...
	I0813 00:18:56.465160  854283 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-20210813001622-820289" in "kube-system" namespace to be "Ready" ...
	I0813 00:18:56.472536  854283 pod_ready.go:92] pod "etcd-test-preload-20210813001622-820289" in "kube-system" namespace has status "Ready":"True"
	I0813 00:18:56.472550  854283 pod_ready.go:81] duration metric: took 7.384321ms waiting for pod "etcd-test-preload-20210813001622-820289" in "kube-system" namespace to be "Ready" ...
	I0813 00:18:56.472558  854283 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-20210813001622-820289" in "kube-system" namespace to be "Ready" ...
	I0813 00:18:57.504780  854283 pod_ready.go:92] pod "kube-apiserver-test-preload-20210813001622-820289" in "kube-system" namespace has status "Ready":"True"
	I0813 00:18:57.504807  854283 pod_ready.go:81] duration metric: took 1.032241679s waiting for pod "kube-apiserver-test-preload-20210813001622-820289" in "kube-system" namespace to be "Ready" ...
	I0813 00:18:57.504818  854283 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-20210813001622-820289" in "kube-system" namespace to be "Ready" ...
	I0813 00:18:59.517782  854283 pod_ready.go:102] pod "kube-controller-manager-test-preload-20210813001622-820289" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0813 00:19:00.018528  854283 pod_ready.go:92] pod "kube-controller-manager-test-preload-20210813001622-820289" in "kube-system" namespace has status "Ready":"True"
	I0813 00:19:00.018562  854283 pod_ready.go:81] duration metric: took 2.513736087s waiting for pod "kube-controller-manager-test-preload-20210813001622-820289" in "kube-system" namespace to be "Ready" ...
	I0813 00:19:00.018576  854283 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-h8m4f" in "kube-system" namespace to be "Ready" ...
	I0813 00:19:00.024375  854283 pod_ready.go:92] pod "kube-proxy-h8m4f" in "kube-system" namespace has status "Ready":"True"
	I0813 00:19:00.024396  854283 pod_ready.go:81] duration metric: took 5.811853ms waiting for pod "kube-proxy-h8m4f" in "kube-system" namespace to be "Ready" ...
	I0813 00:19:00.024407  854283 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-20210813001622-820289" in "kube-system" namespace to be "Ready" ...
	I0813 00:19:00.143961  854283 pod_ready.go:92] pod "kube-scheduler-test-preload-20210813001622-820289" in "kube-system" namespace has status "Ready":"True"
	I0813 00:19:00.143986  854283 pod_ready.go:81] duration metric: took 119.569758ms waiting for pod "kube-scheduler-test-preload-20210813001622-820289" in "kube-system" namespace to be "Ready" ...
	I0813 00:19:00.143999  854283 pod_ready.go:38] duration metric: took 3.692453693s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
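The pod_ready waits above boil down to polling each pod's PodReady condition until it reports True. A condensed sketch of that loop (helper names here are illustrative; minikube's actual helpers live in pod_ready.go):

package podready

import (
	"context"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// isReady reports whether the pod's PodReady condition is True,
// the same predicate the pod_ready log lines are checking.
func isReady(p *v1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == v1.PodReady {
			return c.Status == v1.ConditionTrue
		}
	}
	return false
}

// waitReady polls the named kube-system pod until it is Ready or the timeout expires.
func waitReady(client kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		p, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // transient API errors: keep polling
		}
		return isReady(p), nil
	})
}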
	I0813 00:19:00.144028  854283 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 00:19:00.157151  854283 ops.go:34] apiserver oom_adj: -16
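The oom_adj probe shells into the VM and reads /proc; done locally in Go it is just a pgrep plus a file read (a sketch reusing the pgrep pattern from the log line; note that modern kernels prefer oom_score_adj):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Newest process whose full command line matches the pattern, as in the log.
	pid, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		panic(err)
	}
	// oom_adj of -16 tells the OOM killer to strongly avoid this process.
	// (Modern kernels prefer /proc/<pid>/oom_score_adj.)
	data, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Printf("apiserver oom_adj: %s", data)
}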
	I0813 00:19:00.157168  854283 kubeadm.go:604] restartCluster took 16.027020135s
	I0813 00:19:00.157175  854283 kubeadm.go:392] StartCluster complete in 16.114612817s
	I0813 00:19:00.157191  854283 settings.go:142] acquiring lock: {Name:mk8798f78c6f0a1d20052a3e99a18e56ee754eb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:19:00.157283  854283 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0813 00:19:00.157859  854283 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig: {Name:mk56dc63045ab5614dcc5cc2eaf1f7d3442c655e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:19:00.158438  854283 kapi.go:59] client config for test-preload-20210813001622-820289: &rest.Config{Host:"https://192.168.39.8:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/test-preload-20210813001622-820289/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/test-preload-202108
13001622-820289/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
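The rest.Config dump above is what client-go assembles from the profile's kubeconfig: the API endpoint plus the client cert/key and the cluster CA. Building an equivalent config by hand takes a few lines (paths below are placeholders for the profile files shown in the log):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Mirror the dumped fields: endpoint plus client cert/key and cluster CA.
	cfg := &rest.Config{
		Host: "https://192.168.39.8:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/path/to/profiles/test-preload/client.crt", // placeholder
			KeyFile:  "/path/to/profiles/test-preload/client.key", // placeholder
			CAFile:   "/path/to/.minikube/ca.crt",                 // placeholder
		},
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ver, err := client.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", ver.GitVersion) // v1.17.3 for this cluster
}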
	I0813 00:19:00.673434  854283 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "test-preload-20210813001622-820289" rescaled to 1
	I0813 00:19:00.673497  854283 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.39.8 Port:8443 KubernetesVersion:v1.17.3 ControlPlane:true Worker:true}
	I0813 00:19:00.675532  854283 out.go:177] * Verifying Kubernetes components...
	I0813 00:19:00.675603  854283 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 00:19:00.673553  854283 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 00:19:00.673570  854283 addons.go:342] enableAddons start: toEnable=map[default-storageclass:true storage-provisioner:true], additional=[]
	I0813 00:19:00.675683  854283 addons.go:59] Setting storage-provisioner=true in profile "test-preload-20210813001622-820289"
	I0813 00:19:00.675706  854283 addons.go:135] Setting addon storage-provisioner=true in "test-preload-20210813001622-820289"
	W0813 00:19:00.675733  854283 addons.go:147] addon storage-provisioner should already be in state true
	I0813 00:19:00.675762  854283 host.go:66] Checking if "test-preload-20210813001622-820289" exists ...
	I0813 00:19:00.675689  854283 addons.go:59] Setting default-storageclass=true in profile "test-preload-20210813001622-820289"
	I0813 00:19:00.675790  854283 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-20210813001622-820289"
	I0813 00:19:00.676115  854283 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 00:19:00.676126  854283 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 00:19:00.676155  854283 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 00:19:00.676183  854283 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 00:19:00.690965  854283 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43909
	I0813 00:19:00.691002  854283 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41459
	I0813 00:19:00.691474  854283 main.go:130] libmachine: () Calling .GetVersion
	I0813 00:19:00.691704  854283 main.go:130] libmachine: () Calling .GetVersion
	I0813 00:19:00.692041  854283 main.go:130] libmachine: Using API Version  1
	I0813 00:19:00.692063  854283 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 00:19:00.692194  854283 main.go:130] libmachine: Using API Version  1
	I0813 00:19:00.692227  854283 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 00:19:00.692463  854283 main.go:130] libmachine: () Calling .GetMachineName
	I0813 00:19:00.692592  854283 main.go:130] libmachine: () Calling .GetMachineName
	I0813 00:19:00.692632  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .GetState
	I0813 00:19:00.693153  854283 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 00:19:00.693202  854283 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 00:19:00.696865  854283 kapi.go:59] client config for test-preload-20210813001622-820289: &rest.Config{Host:"https://192.168.39.8:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/test-preload-20210813001622-820289/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/test-preload-202108
13001622-820289/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 00:19:00.703754  854283 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40663
	I0813 00:19:00.704156  854283 main.go:130] libmachine: () Calling .GetVersion
	I0813 00:19:00.704561  854283 main.go:130] libmachine: Using API Version  1
	I0813 00:19:00.704612  854283 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 00:19:00.704928  854283 main.go:130] libmachine: () Calling .GetMachineName
	I0813 00:19:00.705105  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .GetState
	I0813 00:19:00.707615  854283 addons.go:135] Setting addon default-storageclass=true in "test-preload-20210813001622-820289"
	W0813 00:19:00.707636  854283 addons.go:147] addon default-storageclass should already be in state true
	I0813 00:19:00.707664  854283 host.go:66] Checking if "test-preload-20210813001622-820289" exists ...
	I0813 00:19:00.707991  854283 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 00:19:00.708029  854283 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 00:19:00.708361  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .DriverName
	I0813 00:19:00.710535  854283 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 00:19:00.710626  854283 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 00:19:00.710642  854283 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 00:19:00.710662  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .GetSSHHostname
	I0813 00:19:00.716435  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) DBG | domain test-preload-20210813001622-820289 has defined MAC address 52:54:00:fc:fa:f1 in network mk-test-preload-20210813001622-820289
	I0813 00:19:00.716888  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:fa:f1", ip: ""} in network mk-test-preload-20210813001622-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:16:40 +0000 UTC Type:0 Mac:52:54:00:fc:fa:f1 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:test-preload-20210813001622-820289 Clientid:01:52:54:00:fc:fa:f1}
	I0813 00:19:00.716922  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) DBG | domain test-preload-20210813001622-820289 has defined IP address 192.168.39.8 and MAC address 52:54:00:fc:fa:f1 in network mk-test-preload-20210813001622-820289
	I0813 00:19:00.717049  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .GetSSHPort
	I0813 00:19:00.717231  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .GetSSHKeyPath
	I0813 00:19:00.717375  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .GetSSHUsername
	I0813 00:19:00.717540  854283 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/test-preload-20210813001622-820289/id_rsa Username:docker}
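The sshutil client above is a stock SSH connection authenticated with the machine's id_rsa. A minimal equivalent using golang.org/x/crypto/ssh (the key path is a placeholder; host-key checking is skipped only because the target is an ephemeral test VM):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/path/to/machines/test-preload/id_rsa") // placeholder
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	conn, err := ssh.Dial("tcp", "192.168.39.8:22", cfg)
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	sess, err := conn.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("uname -a")
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}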
	I0813 00:19:00.720431  854283 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:32887
	I0813 00:19:00.720813  854283 main.go:130] libmachine: () Calling .GetVersion
	I0813 00:19:00.721235  854283 main.go:130] libmachine: Using API Version  1
	I0813 00:19:00.721256  854283 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 00:19:00.721557  854283 main.go:130] libmachine: () Calling .GetMachineName
	I0813 00:19:00.722158  854283 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 00:19:00.722215  854283 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 00:19:00.732788  854283 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45253
	I0813 00:19:00.733220  854283 main.go:130] libmachine: () Calling .GetVersion
	I0813 00:19:00.733712  854283 main.go:130] libmachine: Using API Version  1
	I0813 00:19:00.733740  854283 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 00:19:00.734073  854283 main.go:130] libmachine: () Calling .GetMachineName
	I0813 00:19:00.734252  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .GetState
	I0813 00:19:00.737149  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .DriverName
	I0813 00:19:00.737358  854283 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 00:19:00.737373  854283 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 00:19:00.737389  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .GetSSHHostname
	I0813 00:19:00.742585  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) DBG | domain test-preload-20210813001622-820289 has defined MAC address 52:54:00:fc:fa:f1 in network mk-test-preload-20210813001622-820289
	I0813 00:19:00.742974  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:fa:f1", ip: ""} in network mk-test-preload-20210813001622-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:16:40 +0000 UTC Type:0 Mac:52:54:00:fc:fa:f1 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:test-preload-20210813001622-820289 Clientid:01:52:54:00:fc:fa:f1}
	I0813 00:19:00.743008  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) DBG | domain test-preload-20210813001622-820289 has defined IP address 192.168.39.8 and MAC address 52:54:00:fc:fa:f1 in network mk-test-preload-20210813001622-820289
	I0813 00:19:00.743106  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .GetSSHPort
	I0813 00:19:00.743274  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .GetSSHKeyPath
	I0813 00:19:00.743427  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .GetSSHUsername
	I0813 00:19:00.743555  854283 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/test-preload-20210813001622-820289/id_rsa Username:docker}
	I0813 00:19:00.796714  854283 node_ready.go:35] waiting up to 6m0s for node "test-preload-20210813001622-820289" to be "Ready" ...
	I0813 00:19:00.797057  854283 start.go:716] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0813 00:19:00.800083  854283 node_ready.go:49] node "test-preload-20210813001622-820289" has status "Ready":"True"
	I0813 00:19:00.800099  854283 node_ready.go:38] duration metric: took 3.350535ms waiting for node "test-preload-20210813001622-820289" to be "Ready" ...
	I0813 00:19:00.800110  854283 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 00:19:00.805123  854283 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6955765f44-qd5cs" in "kube-system" namespace to be "Ready" ...
	I0813 00:19:00.837239  854283 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.17.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 00:19:00.847737  854283 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.17.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 00:19:01.211502  854283 main.go:130] libmachine: Making call to close driver server
	I0813 00:19:01.211524  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .Close
	I0813 00:19:01.211827  854283 main.go:130] libmachine: Successfully made call to close driver server
	I0813 00:19:01.211846  854283 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 00:19:01.211856  854283 main.go:130] libmachine: Making call to close driver server
	I0813 00:19:01.211861  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) DBG | Closing plugin on server side
	I0813 00:19:01.211866  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .Close
	I0813 00:19:01.212153  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) DBG | Closing plugin on server side
	I0813 00:19:01.212184  854283 main.go:130] libmachine: Successfully made call to close driver server
	I0813 00:19:01.212227  854283 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 00:19:01.239394  854283 main.go:130] libmachine: Making call to close driver server
	I0813 00:19:01.239415  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .Close
	I0813 00:19:01.239776  854283 main.go:130] libmachine: Successfully made call to close driver server
	I0813 00:19:01.239782  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) DBG | Closing plugin on server side
	I0813 00:19:01.239798  854283 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 00:19:01.239810  854283 main.go:130] libmachine: Making call to close driver server
	I0813 00:19:01.239820  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .Close
	I0813 00:19:01.240050  854283 main.go:130] libmachine: Successfully made call to close driver server
	I0813 00:19:01.240068  854283 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 00:19:01.240079  854283 main.go:130] libmachine: Making call to close driver server
	I0813 00:19:01.240083  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) DBG | Closing plugin on server side
	I0813 00:19:01.240090  854283 main.go:130] libmachine: (test-preload-20210813001622-820289) Calling .Close
	I0813 00:19:01.240319  854283 main.go:130] libmachine: Successfully made call to close driver server
	I0813 00:19:01.240344  854283 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 00:19:01.242531  854283 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0813 00:19:01.242552  854283 addons.go:344] enableAddons completed in 568.987666ms
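enableAddons copied two manifests into the VM and applied them with the cluster's bundled kubectl. A quick way to verify the result from the host is the usual kubectl queries; a sketch via os/exec (assumes kubectl is on PATH with the profile's context active; the addon's default StorageClass is typically named "standard"):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The storageclass addon should yield a default StorageClass,
	// and storage-provisioner runs as a kube-system pod.
	for _, args := range [][]string{
		{"kubectl", "get", "storageclass"},
		{"kubectl", "-n", "kube-system", "get", "pod", "storage-provisioner"},
	} {
		out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
		fmt.Printf("$ %s\n%s", strings.Join(args, " "), out)
		if err != nil {
			panic(err)
		}
	}
}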
	I0813 00:19:02.450585  854283 pod_ready.go:92] pod "coredns-6955765f44-qd5cs" in "kube-system" namespace has status "Ready":"True"
	I0813 00:19:02.450612  854283 pod_ready.go:81] duration metric: took 1.645461543s waiting for pod "coredns-6955765f44-qd5cs" in "kube-system" namespace to be "Ready" ...
	I0813 00:19:02.450629  854283 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-20210813001622-820289" in "kube-system" namespace to be "Ready" ...
	I0813 00:19:02.544496  854283 pod_ready.go:92] pod "etcd-test-preload-20210813001622-820289" in "kube-system" namespace has status "Ready":"True"
	I0813 00:19:02.544516  854283 pod_ready.go:81] duration metric: took 93.880307ms waiting for pod "etcd-test-preload-20210813001622-820289" in "kube-system" namespace to be "Ready" ...
	I0813 00:19:02.544525  854283 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-20210813001622-820289" in "kube-system" namespace to be "Ready" ...
	I0813 00:19:02.943937  854283 pod_ready.go:92] pod "kube-apiserver-test-preload-20210813001622-820289" in "kube-system" namespace has status "Ready":"True"
	I0813 00:19:02.943963  854283 pod_ready.go:81] duration metric: took 399.430796ms waiting for pod "kube-apiserver-test-preload-20210813001622-820289" in "kube-system" namespace to be "Ready" ...
	I0813 00:19:02.943976  854283 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-20210813001622-820289" in "kube-system" namespace to be "Ready" ...
	I0813 00:19:03.344419  854283 pod_ready.go:92] pod "kube-controller-manager-test-preload-20210813001622-820289" in "kube-system" namespace has status "Ready":"True"
	I0813 00:19:03.344445  854283 pod_ready.go:81] duration metric: took 400.462482ms waiting for pod "kube-controller-manager-test-preload-20210813001622-820289" in "kube-system" namespace to be "Ready" ...
	I0813 00:19:03.344460  854283 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h8m4f" in "kube-system" namespace to be "Ready" ...
	I0813 00:19:03.744704  854283 pod_ready.go:92] pod "kube-proxy-h8m4f" in "kube-system" namespace has status "Ready":"True"
	I0813 00:19:03.744731  854283 pod_ready.go:81] duration metric: took 400.259583ms waiting for pod "kube-proxy-h8m4f" in "kube-system" namespace to be "Ready" ...
	I0813 00:19:03.744741  854283 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-20210813001622-820289" in "kube-system" namespace to be "Ready" ...
	I0813 00:19:04.144772  854283 pod_ready.go:92] pod "kube-scheduler-test-preload-20210813001622-820289" in "kube-system" namespace has status "Ready":"True"
	I0813 00:19:04.144793  854283 pod_ready.go:81] duration metric: took 400.044359ms waiting for pod "kube-scheduler-test-preload-20210813001622-820289" in "kube-system" namespace to be "Ready" ...
	I0813 00:19:04.144804  854283 pod_ready.go:38] duration metric: took 3.344682496s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 00:19:04.144822  854283 api_server.go:50] waiting for apiserver process to appear ...
	I0813 00:19:04.144857  854283 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 00:19:04.155807  854283 api_server.go:70] duration metric: took 3.482278469s to wait for apiserver process to appear ...
	I0813 00:19:04.155823  854283 api_server.go:86] waiting for apiserver healthz status ...
	I0813 00:19:04.155832  854283 api_server.go:239] Checking apiserver healthz at https://192.168.39.8:8443/healthz ...
	I0813 00:19:04.162193  854283 api_server.go:265] https://192.168.39.8:8443/healthz returned 200:
	ok
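The healthz probe is a plain GET on the apiserver's secured port; with client-go the same request can be issued through the discovery client's raw REST interface (the kubeconfig path is a placeholder):

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	// GET https://<apiserver>/healthz; a healthy apiserver answers 200 with body "ok".
	body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)
}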
	I0813 00:19:04.163194  854283 api_server.go:139] control plane version: v1.17.3
	I0813 00:19:04.163213  854283 api_server.go:129] duration metric: took 7.385599ms to wait for apiserver health ...
	I0813 00:19:04.163223  854283 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 00:19:04.346635  854283 system_pods.go:59] 7 kube-system pods found
	I0813 00:19:04.346659  854283 system_pods.go:61] "coredns-6955765f44-qd5cs" [ce64704b-5280-4603-9103-0fc8f906d6eb] Running
	I0813 00:19:04.346664  854283 system_pods.go:61] "etcd-test-preload-20210813001622-820289" [f6e89987-f8fa-410a-b70d-6b78603b469b] Running
	I0813 00:19:04.346668  854283 system_pods.go:61] "kube-apiserver-test-preload-20210813001622-820289" [fd4a4611-3c4e-4294-a625-8bba8706dde3] Running
	I0813 00:19:04.346672  854283 system_pods.go:61] "kube-controller-manager-test-preload-20210813001622-820289" [a97d0cb8-bf69-444c-9804-e784aceb1803] Running
	I0813 00:19:04.346675  854283 system_pods.go:61] "kube-proxy-h8m4f" [28c49088-079a-4a84-a17a-db12715f9314] Running
	I0813 00:19:04.346679  854283 system_pods.go:61] "kube-scheduler-test-preload-20210813001622-820289" [09483d5e-8196-46b5-831a-52b4198203d3] Running
	I0813 00:19:04.346683  854283 system_pods.go:61] "storage-provisioner" [7a0dcf9e-e408-42b1-b85a-5ad22cab3a5d] Running
	I0813 00:19:04.346689  854283 system_pods.go:74] duration metric: took 183.461145ms to wait for pod list to return data ...
	I0813 00:19:04.346695  854283 default_sa.go:34] waiting for default service account to be created ...
	I0813 00:19:04.548498  854283 default_sa.go:45] found service account: "default"
	I0813 00:19:04.548526  854283 default_sa.go:55] duration metric: took 201.815731ms for default service account to be created ...
	I0813 00:19:04.548537  854283 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 00:19:04.747346  854283 system_pods.go:86] 7 kube-system pods found
	I0813 00:19:04.747374  854283 system_pods.go:89] "coredns-6955765f44-qd5cs" [ce64704b-5280-4603-9103-0fc8f906d6eb] Running
	I0813 00:19:04.747380  854283 system_pods.go:89] "etcd-test-preload-20210813001622-820289" [f6e89987-f8fa-410a-b70d-6b78603b469b] Running
	I0813 00:19:04.747385  854283 system_pods.go:89] "kube-apiserver-test-preload-20210813001622-820289" [fd4a4611-3c4e-4294-a625-8bba8706dde3] Running
	I0813 00:19:04.747389  854283 system_pods.go:89] "kube-controller-manager-test-preload-20210813001622-820289" [a97d0cb8-bf69-444c-9804-e784aceb1803] Running
	I0813 00:19:04.747393  854283 system_pods.go:89] "kube-proxy-h8m4f" [28c49088-079a-4a84-a17a-db12715f9314] Running
	I0813 00:19:04.747397  854283 system_pods.go:89] "kube-scheduler-test-preload-20210813001622-820289" [09483d5e-8196-46b5-831a-52b4198203d3] Running
	I0813 00:19:04.747400  854283 system_pods.go:89] "storage-provisioner" [7a0dcf9e-e408-42b1-b85a-5ad22cab3a5d] Running
	I0813 00:19:04.747407  854283 system_pods.go:126] duration metric: took 198.864206ms to wait for k8s-apps to be running ...
	I0813 00:19:04.747414  854283 system_svc.go:44] waiting for kubelet service to be running ...
	I0813 00:19:04.747456  854283 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 00:19:04.758781  854283 system_svc.go:56] duration metric: took 11.359633ms WaitForService to wait for kubelet.
	I0813 00:19:04.758799  854283 kubeadm.go:547] duration metric: took 4.085274237s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 00:19:04.758819  854283 node_conditions.go:102] verifying NodePressure condition ...
	I0813 00:19:04.944826  854283 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0813 00:19:04.944850  854283 node_conditions.go:123] node cpu capacity is 2
	I0813 00:19:04.944861  854283 node_conditions.go:105] duration metric: took 186.037547ms to run NodePressure ...
	I0813 00:19:04.944871  854283 start.go:231] waiting for startup goroutines ...
	I0813 00:19:04.986844  854283 start.go:462] kubectl: 1.20.5, cluster: 1.17.3 (minor skew: 3)
	I0813 00:19:04.989032  854283 out.go:177] 
	W0813 00:19:04.989185  854283 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.17.3.
	I0813 00:19:04.990536  854283 out.go:177]   - Want kubectl v1.17.3? Try 'minikube kubectl -- get pods -A'
	I0813 00:19:04.992101  854283 out.go:177] * Done! kubectl is now configured to use "test-preload-20210813001622-820289" cluster and "default" namespace by default
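The closing warning compares kubectl's minor version against the cluster's. The skew computation is just a difference of parsed minor components (a sketch; minikube uses its own version helpers):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorOf extracts the minor component from a "major.minor.patch" version string.
// No validation: this sketch assumes well-formed input like the versions in the log.
func minorOf(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	kubectl, cluster := "1.20.5", "1.17.3"
	skew := minorOf(kubectl) - minorOf(cluster)
	// kubectl officially supports +/-1 minor of the server; a skew of 3 triggers the warning.
	fmt.Printf("kubectl %s vs cluster %s: minor skew %d\n", kubectl, cluster, skew)
}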
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Fri 2021-08-13 00:16:36 UTC, end at Fri 2021-08-13 00:19:06 UTC. --
	Aug 13 00:19:05 test-preload-20210813001622-820289 crio[4606]: time="2021-08-13 00:19:05.694691942Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:70407277ccc874dbdd900052c593084620058700f10ae3827e3caf148651d7c8,PodSandboxId:db3f74f5f0ce4164b5f36fafad269291cbadd840c411461b50d79e5c00356eb3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:e5cfb363c7caf31fd5a2a3dbd37b6bfc96099ce20209de8e6f55e00ae7ff56c7,State:CONTAINER_RUNNING,CreatedAt:1628813937317913867,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8m4f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28c49088-079a-4a84-a17a-db12715f9314,},Annotations:map[string]string{io.kubernetes.container.hash: 158b70b2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e03eecdf80a35d3a26282e1513c019c6d523b8ce63829f8adb2caa265e373d4c,PodSandboxId:13548c1ef1814b42b406bed6c740596b923279393507df00a9ad455293f408dd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:1951ff35852d301efa66f57928db5d5caa654068e5c6ea4d7b1c039dad1000d2,State:CONTAINER_RUNNING,CreatedAt:1628813937146013249,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6955765f44-qd5cs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce64704b-5280-4603-9103-0fc8f906d6eb,},Annotations:map[string]string{io.kubernetes.container.hash: 69c2cb84,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6da6e0a8e361e2e27ae7f595c6d188231c051fb763827e5aa8cfce4d27fb341,PodSandboxId:c59c5ebdb47f76ad7bfa9d5622a11e7f59f15e2fddf2955984f1378d3f630c9f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1628813936780201966,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 7a0dcf9e-e408-42b1-b85a-5ad22cab3a5d,},Annotations:map[string]string{io.kubernetes.container.hash: b47cb70c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4be56999c331937233d0d3f01adccd72eaddc405b50614a557de59dc46434ad,PodSandboxId:efa7202207d08ec1ef3568792c24c37cf697d16b962fe6619b3d64efbc1265bd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b0f1517c1f4bb153597033d2efd81a9ac630e6a569307f993b2c0368afcf0302,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:4be02aa0403799a8de33152a8e495114960f0ace9c1dacebc52df978f58f3555,State:CONTAINER_RUNNING,CreatedAt:1628813930478209041,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-20210813001622-820289,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: c7178d8492f798ee160e507a1f6158eb,},Annotations:map[string]string{io.kubernetes.container.hash: 313be8e8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c67f89bfced3d3a7cd506afb33eec820b09d77bc9600ae3fea8a068ef26449c,PodSandboxId:18dd37e579160406058c5350a4e63710ee697336971fb248e78de98efc7f5774,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90d27391b7808cde8d9a81cfa43b1e81de5c4912b4b52a7dccb19eb4fe3c236b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:692cc58967c316106ce83ca6a5c6617c2226126d44931e9372450364ad64c97b,State:CONTAINER_RUNNING,CreatedAt:1628813930073822940,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-20210813001622-820289,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: d1bfeb2307572255a384a184e54ad026,},Annotations:map[string]string{io.kubernetes.container.hash: e4b527d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4dd93d454d7f602b97428ed87ed225090f2e549186f0d866f88aca8c03c99ec,PodSandboxId:aa78581be8a6304f1c8e0bf3c95b149bc3f2dba1e1125808f68e37d4391020cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d109c0821a2b9225b69b99a95000df5cd1de5d606bc187b3620d730d7769c6ad,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:f1b432e0ed98f3e2792cf81d6eaa62dc4bf37dd2efd9000b21d206e891ab1fb4,State:CONTAINER_RUNNING,CreatedAt:1628813929896318496,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-20210813001622-820289,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: 29b5a3494fd7c53351d2b61e9b662a3a,},Annotations:map[string]string{io.kubernetes.container.hash: e3642d59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:225ac55e8ba48055318bf4a4769068fc5320868ce87efef344a41ff833b2afff,PodSandboxId:e28746aaa68fd1b227bf2c780578b35a1efa0c39623f1c492afac324e0b3bbc9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:cfbff4b3797c497fcb2b550d8ce345151f79f7e650212cd63bf2dc3cd5428274,State:CONTAINER_RUNNING,CreatedAt:1628813929223688025,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-20210813001622-820289,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: dcd7047348b95304f11961ab6d0a99de,},Annotations:map[string]string{io.kubernetes.container.hash: 1d5dd31d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2d595a01e7c1af4e0cd4816f9bbf733a0f2002cdd046c625ed5a27ca318b2e9,PodSandboxId:c59c5ebdb47f76ad7bfa9d5622a11e7f59f15e2fddf2955984f1378d3f630c9f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1628813893740727778,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a0dcf9e-e408-42b1-b85a-5ad22cab3a5d,},Ann
otations:map[string]string{io.kubernetes.container.hash: b47cb70c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:008b8de52185ba93c748e98111e4051dbef1a3adb3818923a8ad7a9aeb54ce09,PodSandboxId:db3f74f5f0ce4164b5f36fafad269291cbadd840c411461b50d79e5c00356eb3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19,Annotations:map[string]string{},},ImageRef:7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19,State:CONTAINER_EXITED,CreatedAt:1628813892851612186,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8m4f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28c49088-079a-4a84-a17a-db12715f9314,},Annotations:map[string]string{io.kubernetes.container.hash: 158b70b2,io.k
ubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:062f5ddbf3e0e03651ca624abe77646e4edfecbd7795562e39cc8ae45cda3f04,PodSandboxId:13548c1ef1814b42b406bed6c740596b923279393507df00a9ad455293f408dd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61,Annotations:map[string]string{},},ImageRef:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61,State:CONTAINER_EXITED,CreatedAt:1628813892154884252,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6955765f44-qd5cs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce64704b-5280-4603-9103-0fc8f906d6eb,},Annotations:map[string]string{io.kubernetes.container.hash: 69c2cb84,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4a8133fd9ebe23ad9db2caf1d929ccb898d91af70632e15166a95a155bfaf8d,PodSandboxId:e28746aaa68fd1b227bf2c780578b35a1efa0c39623f1c492afac324e0b3bbc9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,State:CONTAINER_EXITED,CreatedAt:1628813868878634968,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-20210813001622-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcd7047348b95
304f11961ab6d0a99de,},Annotations:map[string]string{io.kubernetes.container.hash: 1d5dd31d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:803caa744a8e8cabf6837af54c4c11725a378fbbb06d219681a16f0281fcfcb3,PodSandboxId:385db44b38c8b7ee4afbd6f499b52d0fa97cdc5e3ebc5fbe7fc1f3ead5c3c1d9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056,Annotations:map[string]string{},},ImageRef:5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056,State:CONTAINER_EXITED,CreatedAt:1628813867535915151,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-20210813001622-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 603b914543a305bf066
dc8de01ce2232,},Annotations:map[string]string{io.kubernetes.container.hash: 589bcd22,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9a8455b064798364aded95d5385e3bcfb0aaab6c987fe33408efe4b9674ddfd,PodSandboxId:48fd9f641438f1b90b068df7be0e85b20aa2fd34a5b07eb19eb5234b609893b6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2,Annotations:map[string]string{},},ImageRef:0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2,State:CONTAINER_EXITED,CreatedAt:1628813867455597250,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-20210813001622-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bfcdeec3c584f675942a6a7a9b0f15d,},Annotations:map[s
tring]string{io.kubernetes.container.hash: a3c4bc57,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:820468f3c8190601605731e6567ffb6f24378dfd0a502efee0ddcba319b2c3b2,PodSandboxId:122a471892cdfac5b97bf97cf90111bbf7d79dce56963540560b2405194fd701,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28,Annotations:map[string]string{},},ImageRef:78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28,State:CONTAINER_EXITED,CreatedAt:1628813867276039825,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-20210813001622-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb577061a17ad23cfbbf52e9419bf32a,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 99930feb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=130d413f-5569-427a-86f2-e8617983d6ac name=/runtime.v1alpha2.RuntimeService/ListContainers
	(two further ListContainers request/response cycles, logged at 00:19:05.774 and 00:19:05.827, are omitted here: their payloads repeat the response above verbatim, differing only in request id, and the final response is truncated in the source)
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6da6e0a8e361e2e27ae7f595c6d188231c051fb763827e5aa8cfce4d27fb341,PodSandboxId:c59c5ebdb47f76ad7bfa9d5622a11e7f59f15e2fddf2955984f1378d3f630c9f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1628813936780201966,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 7a0dcf9e-e408-42b1-b85a-5ad22cab3a5d,},Annotations:map[string]string{io.kubernetes.container.hash: b47cb70c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4be56999c331937233d0d3f01adccd72eaddc405b50614a557de59dc46434ad,PodSandboxId:efa7202207d08ec1ef3568792c24c37cf697d16b962fe6619b3d64efbc1265bd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b0f1517c1f4bb153597033d2efd81a9ac630e6a569307f993b2c0368afcf0302,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:4be02aa0403799a8de33152a8e495114960f0ace9c1dacebc52df978f58f3555,State:CONTAINER_RUNNING,CreatedAt:1628813930478209041,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-20210813001622-820289,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: c7178d8492f798ee160e507a1f6158eb,},Annotations:map[string]string{io.kubernetes.container.hash: 313be8e8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c67f89bfced3d3a7cd506afb33eec820b09d77bc9600ae3fea8a068ef26449c,PodSandboxId:18dd37e579160406058c5350a4e63710ee697336971fb248e78de98efc7f5774,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90d27391b7808cde8d9a81cfa43b1e81de5c4912b4b52a7dccb19eb4fe3c236b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:692cc58967c316106ce83ca6a5c6617c2226126d44931e9372450364ad64c97b,State:CONTAINER_RUNNING,CreatedAt:1628813930073822940,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-20210813001622-820289,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: d1bfeb2307572255a384a184e54ad026,},Annotations:map[string]string{io.kubernetes.container.hash: e4b527d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4dd93d454d7f602b97428ed87ed225090f2e549186f0d866f88aca8c03c99ec,PodSandboxId:aa78581be8a6304f1c8e0bf3c95b149bc3f2dba1e1125808f68e37d4391020cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d109c0821a2b9225b69b99a95000df5cd1de5d606bc187b3620d730d7769c6ad,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:f1b432e0ed98f3e2792cf81d6eaa62dc4bf37dd2efd9000b21d206e891ab1fb4,State:CONTAINER_RUNNING,CreatedAt:1628813929896318496,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-20210813001622-820289,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: 29b5a3494fd7c53351d2b61e9b662a3a,},Annotations:map[string]string{io.kubernetes.container.hash: e3642d59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:225ac55e8ba48055318bf4a4769068fc5320868ce87efef344a41ff833b2afff,PodSandboxId:e28746aaa68fd1b227bf2c780578b35a1efa0c39623f1c492afac324e0b3bbc9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:cfbff4b3797c497fcb2b550d8ce345151f79f7e650212cd63bf2dc3cd5428274,State:CONTAINER_RUNNING,CreatedAt:1628813929223688025,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-20210813001622-820289,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: dcd7047348b95304f11961ab6d0a99de,},Annotations:map[string]string{io.kubernetes.container.hash: 1d5dd31d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2d595a01e7c1af4e0cd4816f9bbf733a0f2002cdd046c625ed5a27ca318b2e9,PodSandboxId:c59c5ebdb47f76ad7bfa9d5622a11e7f59f15e2fddf2955984f1378d3f630c9f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1628813893740727778,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a0dcf9e-e408-42b1-b85a-5ad22cab3a5d,},Ann
otations:map[string]string{io.kubernetes.container.hash: b47cb70c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:008b8de52185ba93c748e98111e4051dbef1a3adb3818923a8ad7a9aeb54ce09,PodSandboxId:db3f74f5f0ce4164b5f36fafad269291cbadd840c411461b50d79e5c00356eb3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19,Annotations:map[string]string{},},ImageRef:7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19,State:CONTAINER_EXITED,CreatedAt:1628813892851612186,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8m4f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28c49088-079a-4a84-a17a-db12715f9314,},Annotations:map[string]string{io.kubernetes.container.hash: 158b70b2,io.k
ubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:062f5ddbf3e0e03651ca624abe77646e4edfecbd7795562e39cc8ae45cda3f04,PodSandboxId:13548c1ef1814b42b406bed6c740596b923279393507df00a9ad455293f408dd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61,Annotations:map[string]string{},},ImageRef:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61,State:CONTAINER_EXITED,CreatedAt:1628813892154884252,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6955765f44-qd5cs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce64704b-5280-4603-9103-0fc8f906d6eb,},Annotations:map[string]string{io.kubernetes.container.hash: 69c2cb84,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4a8133fd9ebe23ad9db2caf1d929ccb898d91af70632e15166a95a155bfaf8d,PodSandboxId:e28746aaa68fd1b227bf2c780578b35a1efa0c39623f1c492afac324e0b3bbc9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,State:CONTAINER_EXITED,CreatedAt:1628813868878634968,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-20210813001622-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcd7047348b95
304f11961ab6d0a99de,},Annotations:map[string]string{io.kubernetes.container.hash: 1d5dd31d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:803caa744a8e8cabf6837af54c4c11725a378fbbb06d219681a16f0281fcfcb3,PodSandboxId:385db44b38c8b7ee4afbd6f499b52d0fa97cdc5e3ebc5fbe7fc1f3ead5c3c1d9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056,Annotations:map[string]string{},},ImageRef:5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056,State:CONTAINER_EXITED,CreatedAt:1628813867535915151,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-20210813001622-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 603b914543a305bf066
dc8de01ce2232,},Annotations:map[string]string{io.kubernetes.container.hash: 589bcd22,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9a8455b064798364aded95d5385e3bcfb0aaab6c987fe33408efe4b9674ddfd,PodSandboxId:48fd9f641438f1b90b068df7be0e85b20aa2fd34a5b07eb19eb5234b609893b6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2,Annotations:map[string]string{},},ImageRef:0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2,State:CONTAINER_EXITED,CreatedAt:1628813867455597250,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-20210813001622-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bfcdeec3c584f675942a6a7a9b0f15d,},Annotations:map[s
tring]string{io.kubernetes.container.hash: a3c4bc57,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:820468f3c8190601605731e6567ffb6f24378dfd0a502efee0ddcba319b2c3b2,PodSandboxId:122a471892cdfac5b97bf97cf90111bbf7d79dce56963540560b2405194fd701,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28,Annotations:map[string]string{},},ImageRef:78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28,State:CONTAINER_EXITED,CreatedAt:1628813867276039825,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-20210813001622-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb577061a17ad23cfbbf52e9419bf32a,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 99930feb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9ffc840a-7708-44c9-b70a-5035031c8074 name=/runtime.v1alpha2.RuntimeService/ListContainers
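
The log lines above are CRI-O answering ListContainers polls on its /runtime.v1alpha2.RuntimeService gRPC endpoint: an empty ContainerFilter ("No filters were applied") returns the full container list, running and exited alike. For readers reproducing this query outside the test, here is a minimal Go sketch under stated assumptions: the default CRI-O socket path /var/run/crio/crio.sock is an assumption for this sketch, and the client is the generated k8s.io/cri-api v1alpha2 stub, chosen to match the method name recorded in the log.

	// Minimal sketch (not part of the test): issue the same ListContainers
	// query seen in the CRI-O debug log above.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		v1alpha2 "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Dial the CRI-O unix socket (default path; an assumption here).
		conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
			grpc.WithInsecure(), grpc.WithBlock())
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		client := v1alpha2.NewRuntimeServiceClient(conn)

		// An empty filter mirrors the logged requests: CRI-O reports
		// "No filters were applied" and returns every container.
		resp, err := client.ListContainers(ctx, &v1alpha2.ListContainersRequest{
			Filter: &v1alpha2.ContainerFilter{},
		})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s attempt=%d state=%s id=%.13s\n",
				c.Metadata.Name, c.Metadata.Attempt, c.State, c.Id)
		}
	}

Running crictl ps -a against the same socket gives a comparable view of this list.
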
	Aug 13 00:19:05 test-preload-20210813001622-820289 crio[4606]: time="2021-08-13 00:19:05.959664460Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=03b7540e-9610-4f49-8349-2b89a3b477bd name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 00:19:05 test-preload-20210813001622-820289 crio[4606]: time="2021-08-13 00:19:05.959891111Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=03b7540e-9610-4f49-8349-2b89a3b477bd name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 00:19:05 test-preload-20210813001622-820289 crio[4606]: time="2021-08-13 00:19:05.960886476Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:70407277ccc874dbdd900052c593084620058700f10ae3827e3caf148651d7c8,PodSandboxId:db3f74f5f0ce4164b5f36fafad269291cbadd840c411461b50d79e5c00356eb3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:e5cfb363c7caf31fd5a2a3dbd37b6bfc96099ce20209de8e6f55e00ae7ff56c7,State:CONTAINER_RUNNING,CreatedAt:1628813937317913867,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8m4f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28c49088-079a-4a84-a17a-db12715f9314,},Annotations:map[string]string{io.kubernetes.container.hash: 158b70b2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e03eecdf80a35d3a26282e1513c019c6d523b8ce63829f8adb2caa265e373d4c,PodSandboxId:13548c1ef1814b42b406bed6c740596b923279393507df00a9ad455293f408dd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:1951ff35852d301efa66f57928db5d5caa654068e5c6ea4d7b1c039dad1000d2,State:CONTAINER_RUNNING,CreatedAt:1628813937146013249,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6955765f44-qd5cs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce64704b-5280-4603-9103-0fc8f906d6eb,},Annotations:map[string]string{io.kubernetes.container.hash: 69c2cb84,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6da6e0a8e361e2e27ae7f595c6d188231c051fb763827e5aa8cfce4d27fb341,PodSandboxId:c59c5ebdb47f76ad7bfa9d5622a11e7f59f15e2fddf2955984f1378d3f630c9f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1628813936780201966,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 7a0dcf9e-e408-42b1-b85a-5ad22cab3a5d,},Annotations:map[string]string{io.kubernetes.container.hash: b47cb70c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4be56999c331937233d0d3f01adccd72eaddc405b50614a557de59dc46434ad,PodSandboxId:efa7202207d08ec1ef3568792c24c37cf697d16b962fe6619b3d64efbc1265bd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b0f1517c1f4bb153597033d2efd81a9ac630e6a569307f993b2c0368afcf0302,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:4be02aa0403799a8de33152a8e495114960f0ace9c1dacebc52df978f58f3555,State:CONTAINER_RUNNING,CreatedAt:1628813930478209041,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-20210813001622-820289,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: c7178d8492f798ee160e507a1f6158eb,},Annotations:map[string]string{io.kubernetes.container.hash: 313be8e8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c67f89bfced3d3a7cd506afb33eec820b09d77bc9600ae3fea8a068ef26449c,PodSandboxId:18dd37e579160406058c5350a4e63710ee697336971fb248e78de98efc7f5774,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90d27391b7808cde8d9a81cfa43b1e81de5c4912b4b52a7dccb19eb4fe3c236b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:692cc58967c316106ce83ca6a5c6617c2226126d44931e9372450364ad64c97b,State:CONTAINER_RUNNING,CreatedAt:1628813930073822940,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-20210813001622-820289,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: d1bfeb2307572255a384a184e54ad026,},Annotations:map[string]string{io.kubernetes.container.hash: e4b527d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4dd93d454d7f602b97428ed87ed225090f2e549186f0d866f88aca8c03c99ec,PodSandboxId:aa78581be8a6304f1c8e0bf3c95b149bc3f2dba1e1125808f68e37d4391020cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d109c0821a2b9225b69b99a95000df5cd1de5d606bc187b3620d730d7769c6ad,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:f1b432e0ed98f3e2792cf81d6eaa62dc4bf37dd2efd9000b21d206e891ab1fb4,State:CONTAINER_RUNNING,CreatedAt:1628813929896318496,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-20210813001622-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29b5a3494fd7c53351d2b61e9b662a3a,},Annotations:map[string]string{io.kubernetes.container.hash: e3642d59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:225ac55e8ba48055318bf4a4769068fc5320868ce87efef344a41ff833b2afff,PodSandboxId:e28746aaa68fd1b227bf2c780578b35a1efa0c39623f1c492afac324e0b3bbc9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:cfbff4b3797c497fcb2b550d8ce345151f79f7e650212cd63bf2dc3cd5428274,State:CONTAINER_RUNNING,CreatedAt:1628813929223688025,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-20210813001622-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcd7047348b95304f11961ab6d0a99de,},Annotations:map[string]string{io.kubernetes.container.hash: 1d5dd31d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2d595a01e7c1af4e0cd4816f9bbf733a0f2002cdd046c625ed5a27ca318b2e9,PodSandboxId:c59c5ebdb47f76ad7bfa9d5622a11e7f59f15e2fddf2955984f1378d3f630c9f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1628813893740727778,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a0dcf9e-e408-42b1-b85a-5ad22cab3a5d,},Annotations:map[string]string{io.kubernetes.container.hash: b47cb70c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:008b8de52185ba93c748e98111e4051dbef1a3adb3818923a8ad7a9aeb54ce09,PodSandboxId:db3f74f5f0ce4164b5f36fafad269291cbadd840c411461b50d79e5c00356eb3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19,Annotations:map[string]string{},},ImageRef:7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19,State:CONTAINER_EXITED,CreatedAt:1628813892851612186,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8m4f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28c49088-079a-4a84-a17a-db12715f9314,},Annotations:map[string]string{io.kubernetes.container.hash: 158b70b2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:062f5ddbf3e0e03651ca624abe77646e4edfecbd7795562e39cc8ae45cda3f04,PodSandboxId:13548c1ef1814b42b406bed6c740596b923279393507df00a9ad455293f408dd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61,Annotations:map[string]string{},},ImageRef:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61,State:CONTAINER_EXITED,CreatedAt:1628813892154884252,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6955765f44-qd5cs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce64704b-5280-4603-9103-0fc8f906d6eb,},Annotations:map[string]string{io.kubernetes.container.hash: 69c2cb84,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4a8133fd9ebe23ad9db2caf1d929ccb898d91af70632e15166a95a155bfaf8d,PodSandboxId:e28746aaa68fd1b227bf2c780578b35a1efa0c39623f1c492afac324e0b3bbc9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,State:CONTAINER_EXITED,CreatedAt:1628813868878634968,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-20210813001622-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcd7047348b95304f11961ab6d0a99de,},Annotations:map[string]string{io.kubernetes.container.hash: 1d5dd31d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:803caa744a8e8cabf6837af54c4c11725a378fbbb06d219681a16f0281fcfcb3,PodSandboxId:385db44b38c8b7ee4afbd6f499b52d0fa97cdc5e3ebc5fbe7fc1f3ead5c3c1d9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056,Annotations:map[string]string{},},ImageRef:5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056,State:CONTAINER_EXITED,CreatedAt:1628813867535915151,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-20210813001622-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 603b914543a305bf066dc8de01ce2232,},Annotations:map[string]string{io.kubernetes.container.hash: 589bcd22,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9a8455b064798364aded95d5385e3bcfb0aaab6c987fe33408efe4b9674ddfd,PodSandboxId:48fd9f641438f1b90b068df7be0e85b20aa2fd34a5b07eb19eb5234b609893b6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2,Annotations:map[string]string{},},ImageRef:0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2,State:CONTAINER_EXITED,CreatedAt:1628813867455597250,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-20210813001622-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bfcdeec3c584f675942a6a7a9b0f15d,},Annotations:map[string]string{io.kubernetes.container.hash: a3c4bc57,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:820468f3c8190601605731e6567ffb6f24378dfd0a502efee0ddcba319b2c3b2,PodSandboxId:122a471892cdfac5b97bf97cf90111bbf7d79dce56963540560b2405194fd701,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28,Annotations:map[string]string{},},ImageRef:78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28,State:CONTAINER_EXITED,CreatedAt:1628813867276039825,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-20210813001622-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb577061a17ad23cfbbf52e9419bf32a,},Annotations:map[string]string{io.kubernetes.container.hash: 99930feb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=03b7540e-9610-4f49-8349-2b89a3b477bd name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 00:19:06 test-preload-20210813001622-820289 crio[4606]: time="2021-08-13 00:19:06.001624641Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4173a91e-aaef-4cc8-b640-164642fcb5b6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 00:19:06 test-preload-20210813001622-820289 crio[4606]: time="2021-08-13 00:19:06.001732942Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4173a91e-aaef-4cc8-b640-164642fcb5b6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 00:19:06 test-preload-20210813001622-820289 crio[4606]: time="2021-08-13 00:19:06.002785933Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:70407277ccc874dbdd900052c593084620058700f10ae3827e3caf148651d7c8,PodSandboxId:db3f74f5f0ce4164b5f36fafad269291cbadd840c411461b50d79e5c00356eb3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:e5cfb363c7caf31fd5a2a3dbd37b6bfc96099ce20209de8e6f55e00ae7ff56c7,State:CONTAINER_RUNNING,CreatedAt:1628813937317913867,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8m4f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28c49088-079a-4a84-a17a-db12715f9314,},Annotations:map[string]string{io.kubernetes.container.hash: 158b70b2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e03eecdf80a35d3a26282e1513c019c6d523b8ce63829f8adb2caa265e373d4c,PodSandboxId:13548c1ef1814b42b406bed6c740596b923279393507df00a9ad455293f408dd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:1951ff35852d301efa66f57928db5d5caa654068e5c6ea4d7b1c039dad1000d2,State:CONTAINER_RUNNING,CreatedAt:1628813937146013249,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6955765f44-qd5cs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce64704b-5280-4603-9103-0fc8f906d6eb,},Annotations:map[string]string{io.kubernetes.container.hash: 69c2cb84,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6da6e0a8e361e2e27ae7f595c6d188231c051fb763827e5aa8cfce4d27fb341,PodSandboxId:c59c5ebdb47f76ad7bfa9d5622a11e7f59f15e2fddf2955984f1378d3f630c9f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1628813936780201966,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a0dcf9e-e408-42b1-b85a-5ad22cab3a5d,},Annotations:map[string]string{io.kubernetes.container.hash: b47cb70c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4be56999c331937233d0d3f01adccd72eaddc405b50614a557de59dc46434ad,PodSandboxId:efa7202207d08ec1ef3568792c24c37cf697d16b962fe6619b3d64efbc1265bd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b0f1517c1f4bb153597033d2efd81a9ac630e6a569307f993b2c0368afcf0302,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:4be02aa0403799a8de33152a8e495114960f0ace9c1dacebc52df978f58f3555,State:CONTAINER_RUNNING,CreatedAt:1628813930478209041,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-20210813001622-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7178d8492f798ee160e507a1f6158eb,},Annotations:map[string]string{io.kubernetes.container.hash: 313be8e8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c67f89bfced3d3a7cd506afb33eec820b09d77bc9600ae3fea8a068ef26449c,PodSandboxId:18dd37e579160406058c5350a4e63710ee697336971fb248e78de98efc7f5774,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90d27391b7808cde8d9a81cfa43b1e81de5c4912b4b52a7dccb19eb4fe3c236b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:692cc58967c316106ce83ca6a5c6617c2226126d44931e9372450364ad64c97b,State:CONTAINER_RUNNING,CreatedAt:1628813930073822940,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-20210813001622-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1bfeb2307572255a384a184e54ad026,},Annotations:map[string]string{io.kubernetes.container.hash: e4b527d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4dd93d454d7f602b97428ed87ed225090f2e549186f0d866f88aca8c03c99ec,PodSandboxId:aa78581be8a6304f1c8e0bf3c95b149bc3f2dba1e1125808f68e37d4391020cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d109c0821a2b9225b69b99a95000df5cd1de5d606bc187b3620d730d7769c6ad,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:f1b432e0ed98f3e2792cf81d6eaa62dc4bf37dd2efd9000b21d206e891ab1fb4,State:CONTAINER_RUNNING,CreatedAt:1628813929896318496,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-20210813001622-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29b5a3494fd7c53351d2b61e9b662a3a,},Annotations:map[string]string{io.kubernetes.container.hash: e3642d59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:225ac55e8ba48055318bf4a4769068fc5320868ce87efef344a41ff833b2afff,PodSandboxId:e28746aaa68fd1b227bf2c780578b35a1efa0c39623f1c492afac324e0b3bbc9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:cfbff4b3797c497fcb2b550d8ce345151f79f7e650212cd63bf2dc3cd5428274,State:CONTAINER_RUNNING,CreatedAt:1628813929223688025,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-20210813001622-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcd7047348b95304f11961ab6d0a99de,},Annotations:map[string]string{io.kubernetes.container.hash: 1d5dd31d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2d595a01e7c1af4e0cd4816f9bbf733a0f2002cdd046c625ed5a27ca318b2e9,PodSandboxId:c59c5ebdb47f76ad7bfa9d5622a11e7f59f15e2fddf2955984f1378d3f630c9f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1628813893740727778,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a0dcf9e-e408-42b1-b85a-5ad22cab3a5d,},Annotations:map[string]string{io.kubernetes.container.hash: b47cb70c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:008b8de52185ba93c748e98111e4051dbef1a3adb3818923a8ad7a9aeb54ce09,PodSandboxId:db3f74f5f0ce4164b5f36fafad269291cbadd840c411461b50d79e5c00356eb3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19,Annotations:map[string]string{},},ImageRef:7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19,State:CONTAINER_EXITED,CreatedAt:1628813892851612186,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8m4f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28c49088-079a-4a84-a17a-db12715f9314,},Annotations:map[string]string{io.kubernetes.container.hash: 158b70b2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:062f5ddbf3e0e03651ca624abe77646e4edfecbd7795562e39cc8ae45cda3f04,PodSandboxId:13548c1ef1814b42b406bed6c740596b923279393507df00a9ad455293f408dd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61,Annotations:map[string]string{},},ImageRef:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61,State:CONTAINER_EXITED,CreatedAt:1628813892154884252,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6955765f44-qd5cs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce64704b-5280-4603-9103-0fc8f906d6eb,},Annotations:map[string]string{io.kubernetes.container.hash: 69c2cb84,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4a8133fd9ebe23ad9db2caf1d929ccb898d91af70632e15166a95a155bfaf8d,PodSandboxId:e28746aaa68fd1b227bf2c780578b35a1efa0c39623f1c492afac324e0b3bbc9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,State:CONTAINER_EXITED,CreatedAt:1628813868878634968,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-20210813001622-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcd7047348b95304f11961ab6d0a99de,},Annotations:map[string]string{io.kubernetes.container.hash: 1d5dd31d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:803caa744a8e8cabf6837af54c4c11725a378fbbb06d219681a16f0281fcfcb3,PodSandboxId:385db44b38c8b7ee4afbd6f499b52d0fa97cdc5e3ebc5fbe7fc1f3ead5c3c1d9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056,Annotations:map[string]string{},},ImageRef:5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056,State:CONTAINER_EXITED,CreatedAt:1628813867535915151,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-20210813001622-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 603b914543a305bf066dc8de01ce2232,},Annotations:map[string]string{io.kubernetes.container.hash: 589bcd22,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9a8455b064798364aded95d5385e3bcfb0aaab6c987fe33408efe4b9674ddfd,PodSandboxId:48fd9f641438f1b90b068df7be0e85b20aa2fd34a5b07eb19eb5234b609893b6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2,Annotations:map[string]string{},},ImageRef:0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2,State:CONTAINER_EXITED,CreatedAt:1628813867455597250,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-20210813001622-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bfcdeec3c584f675942a6a7a9b0f15d,},Annotations:map[string]string{io.kubernetes.container.hash: a3c4bc57,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:820468f3c8190601605731e6567ffb6f24378dfd0a502efee0ddcba319b2c3b2,PodSandboxId:122a471892cdfac5b97bf97cf90111bbf7d79dce56963540560b2405194fd701,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28,Annotations:map[string]string{},},ImageRef:78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28,State:CONTAINER_EXITED,CreatedAt:1628813867276039825,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-20210813001622-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb577061a17ad23cfbbf52e9419bf32a,},Annotations:map[string]string{io.kubernetes.container.hash: 99930feb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4173a91e-aaef-4cc8-b640-164642fcb5b6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 00:19:06 test-preload-20210813001622-820289 crio[4606]: time="2021-08-13 00:19:06.040683044Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=703f2b4e-4592-4c64-a27b-347f57212fa5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 00:19:06 test-preload-20210813001622-820289 crio[4606]: time="2021-08-13 00:19:06.040734408Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=703f2b4e-4592-4c64-a27b-347f57212fa5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 00:19:06 test-preload-20210813001622-820289 crio[4606]: time="2021-08-13 00:19:06.040984904Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:70407277ccc874dbdd900052c593084620058700f10ae3827e3caf148651d7c8,PodSandboxId:db3f74f5f0ce4164b5f36fafad269291cbadd840c411461b50d79e5c00356eb3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:e5cfb363c7caf31fd5a2a3dbd37b6bfc96099ce20209de8e6f55e00ae7ff56c7,State:CONTAINER_RUNNING,CreatedAt:1628813937317913867,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8m4f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28c49088-079a-4a84-a17a-db12715f9314,},Annotations:map[string]string{io.kubernetes.container.hash: 158b70b2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e03eecdf80a35d3a26282e1513c019c6d523b8ce63829f8adb2caa265e373d4c,PodSandboxId:13548c1ef1814b42b406bed6c740596b923279393507df00a9ad455293f408dd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:1951ff35852d301efa66f57928db5d5caa654068e5c6ea4d7b1c039dad1000d2,State:CONTAINER_RUNNING,CreatedAt:1628813937146013249,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6955765f44-qd5cs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce64704b-5280-4603-9103-0fc8f906d6eb,},Annotations:map[string]string{io.kubernetes.container.hash: 69c2cb84,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6da6e0a8e361e2e27ae7f595c6d188231c051fb763827e5aa8cfce4d27fb341,PodSandboxId:c59c5ebdb47f76ad7bfa9d5622a11e7f59f15e2fddf2955984f1378d3f630c9f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1628813936780201966,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a0dcf9e-e408-42b1-b85a-5ad22cab3a5d,},Annotations:map[string]string{io.kubernetes.container.hash: b47cb70c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4be56999c331937233d0d3f01adccd72eaddc405b50614a557de59dc46434ad,PodSandboxId:efa7202207d08ec1ef3568792c24c37cf697d16b962fe6619b3d64efbc1265bd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b0f1517c1f4bb153597033d2efd81a9ac630e6a569307f993b2c0368afcf0302,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:4be02aa0403799a8de33152a8e495114960f0ace9c1dacebc52df978f58f3555,State:CONTAINER_RUNNING,CreatedAt:1628813930478209041,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-20210813001622-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7178d8492f798ee160e507a1f6158eb,},Annotations:map[string]string{io.kubernetes.container.hash: 313be8e8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c67f89bfced3d3a7cd506afb33eec820b09d77bc9600ae3fea8a068ef26449c,PodSandboxId:18dd37e579160406058c5350a4e63710ee697336971fb248e78de98efc7f5774,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90d27391b7808cde8d9a81cfa43b1e81de5c4912b4b52a7dccb19eb4fe3c236b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:692cc58967c316106ce83ca6a5c6617c2226126d44931e9372450364ad64c97b,State:CONTAINER_RUNNING,CreatedAt:1628813930073822940,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-20210813001622-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1bfeb2307572255a384a184e54ad026,},Annotations:map[string]string{io.kubernetes.container.hash: e4b527d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4dd93d454d7f602b97428ed87ed225090f2e549186f0d866f88aca8c03c99ec,PodSandboxId:aa78581be8a6304f1c8e0bf3c95b149bc3f2dba1e1125808f68e37d4391020cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d109c0821a2b9225b69b99a95000df5cd1de5d606bc187b3620d730d7769c6ad,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:f1b432e0ed98f3e2792cf81d6eaa62dc4bf37dd2efd9000b21d206e891ab1fb4,State:CONTAINER_RUNNING,CreatedAt:1628813929896318496,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-20210813001622-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29b5a3494fd7c53351d2b61e9b662a3a,},Annotations:map[string]string{io.kubernetes.container.hash: e3642d59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:225ac55e8ba48055318bf4a4769068fc5320868ce87efef344a41ff833b2afff,PodSandboxId:e28746aaa68fd1b227bf2c780578b35a1efa0c39623f1c492afac324e0b3bbc9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:cfbff4b3797c497fcb2b550d8ce345151f79f7e650212cd63bf2dc3cd5428274,State:CONTAINER_RUNNING,CreatedAt:1628813929223688025,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-20210813001622-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcd7047348b95304f11961ab6d0a99de,},Annotations:map[string]string{io.kubernetes.container.hash: 1d5dd31d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2d595a01e7c1af4e0cd4816f9bbf733a0f2002cdd046c625ed5a27ca318b2e9,PodSandboxId:c59c5ebdb47f76ad7bfa9d5622a11e7f59f15e2fddf2955984f1378d3f630c9f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1628813893740727778,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a0dcf9e-e408-42b1-b85a-5ad22cab3a5d,},Annotations:map[string]string{io.kubernetes.container.hash: b47cb70c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:008b8de52185ba93c748e98111e4051dbef1a3adb3818923a8ad7a9aeb54ce09,PodSandboxId:db3f74f5f0ce4164b5f36fafad269291cbadd840c411461b50d79e5c00356eb3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19,Annotations:map[string]string{},},ImageRef:7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19,State:CONTAINER_EXITED,CreatedAt:1628813892851612186,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8m4f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28c49088-079a-4a84-a17a-db12715f9314,},Annotations:map[string]string{io.kubernetes.container.hash: 158b70b2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:062f5ddbf3e0e03651ca624abe77646e4edfecbd7795562e39cc8ae45cda3f04,PodSandboxId:13548c1ef1814b42b406bed6c740596b923279393507df00a9ad455293f408dd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61,Annotations:map[string]string{},},ImageRef:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61,State:CONTAINER_EXITED,CreatedAt:1628813892154884252,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6955765f44-qd5cs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce64704b-5280-4603-9103-0fc8f906d6eb,},Annotations:map[string]string{io.kubernetes.container.hash: 69c2cb84,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4a8133fd9ebe23ad9db2caf1d929ccb898d91af70632e15166a95a155bfaf8d,PodSandboxId:e28746aaa68fd1b227bf2c780578b35a1efa0c39623f1c492afac324e0b3bbc9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,State:CONTAINER_EXITED,CreatedAt:1628813868878634968,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-20210813001622-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcd7047348b95304f11961ab6d0a99de,},Annotations:map[string]string{io.kubernetes.container.hash: 1d5dd31d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:803caa744a8e8cabf6837af54c4c11725a378fbbb06d219681a16f0281fcfcb3,PodSandboxId:385db44b38c8b7ee4afbd6f499b52d0fa97cdc5e3ebc5fbe7fc1f3ead5c3c1d9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056,Annotations:map[string]string{},},ImageRef:5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056,State:CONTAINER_EXITED,CreatedAt:1628813867535915151,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-20210813001622-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 603b914543a305bf066dc8de01ce2232,},Annotations:map[string]string{io.kubernetes.container.hash: 589bcd22,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9a8455b064798364aded95d5385e3bcfb0aaab6c987fe33408efe4b9674ddfd,PodSandboxId:48fd9f641438f1b90b068df7be0e85b20aa2fd34a5b07eb19eb5234b609893b6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2,Annotations:map[string]string{},},ImageRef:0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2,State:CONTAINER_EXITED,CreatedAt:1628813867455597250,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-20210813001622-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bfcdeec3c584f675942a6a7a9b0f15d,},Annotations:map[string]string{io.kubernetes.container.hash: a3c4bc57,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:820468f3c8190601605731e6567ffb6f24378dfd0a502efee0ddcba319b2c3b2,PodSandboxId:122a471892cdfac5b97bf97cf90111bbf7d79dce56963540560b2405194fd701,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28,Annotations:map[string]string{},},ImageRef:78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28,State:CONTAINER_EXITED,CreatedAt:1628813867276039825,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-20210813001622-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb577061a17ad23cfbbf52e9419bf32a,},Annotations:map[string]string{io.kubernetes.container.hash: 99930feb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=703f2b4e-4592-4c64-a27b-347f57212fa5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 00:19:06 test-preload-20210813001622-820289 crio[4606]: time="2021-08-13 00:19:06.081421912Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=909c03b8-603f-4ea5-ac76-9f8708eed8f1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 00:19:06 test-preload-20210813001622-820289 crio[4606]: time="2021-08-13 00:19:06.081471574Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=909c03b8-603f-4ea5-ac76-9f8708eed8f1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Aug 13 00:19:06 test-preload-20210813001622-820289 crio[4606]: time="2021-08-13 00:19:06.082464488Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:70407277ccc874dbdd900052c593084620058700f10ae3827e3caf148651d7c8,PodSandboxId:db3f74f5f0ce4164b5f36fafad269291cbadd840c411461b50d79e5c00356eb3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:e5cfb363c7caf31fd5a2a3dbd37b6bfc96099ce20209de8e6f55e00ae7ff56c7,State:CONTAINER_RUNNING,CreatedAt:1628813937317913867,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8m4f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28c49088-079a-4a84-a17a-db12715f9314,},Annotations:map[string]string{io.kubernetes.container.hash: 158b70b2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e03eecdf80a35d3a26282e1513c019c6d523b8ce63829f8adb2caa265e373d4c,PodSandboxId:13548c1ef1814b42b406bed6c740596b923279393507df00a9ad455293f408dd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:1951ff35852d301efa66f57928db5d5caa654068e5c6ea4d7b1c039dad1000d2,State:CONTAINER_RUNNING,CreatedAt:1628813937146013249,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6955765f44-qd5cs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce64704b-5280-4603-9103-0fc8f906d6eb,},Annotations:map[string]string{io.kubernetes.container.hash: 69c2cb84,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6da6e0a8e361e2e27ae7f595c6d188231c051fb763827e5aa8cfce4d27fb341,PodSandboxId:c59c5ebdb47f76ad7bfa9d5622a11e7f59f15e2fddf2955984f1378d3f630c9f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1628813936780201966,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a0dcf9e-e408-42b1-b85a-5ad22cab3a5d,},Annotations:map[string]string{io.kubernetes.container.hash: b47cb70c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4be56999c331937233d0d3f01adccd72eaddc405b50614a557de59dc46434ad,PodSandboxId:efa7202207d08ec1ef3568792c24c37cf697d16b962fe6619b3d64efbc1265bd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b0f1517c1f4bb153597033d2efd81a9ac630e6a569307f993b2c0368afcf0302,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:4be02aa0403799a8de33152a8e495114960f0ace9c1dacebc52df978f58f3555,State:CONTAINER_RUNNING,CreatedAt:1628813930478209041,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-20210813001622-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7178d8492f798ee160e507a1f6158eb,},Annotations:map[string]string{io.kubernetes.container.hash: 313be8e8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c67f89bfced3d3a7cd506afb33eec820b09d77bc9600ae3fea8a068ef26449c,PodSandboxId:18dd37e579160406058c5350a4e63710ee697336971fb248e78de98efc7f5774,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90d27391b7808cde8d9a81cfa43b1e81de5c4912b4b52a7dccb19eb4fe3c236b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:692cc58967c316106ce83ca6a5c6617c2226126d44931e9372450364ad64c97b,State:CONTAINER_RUNNING,CreatedAt:1628813930073822940,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-20210813001622-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1bfeb2307572255a384a184e54ad026,},Annotations:map[string]string{io.kubernetes.container.hash: e4b527d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4dd93d454d7f602b97428ed87ed225090f2e549186f0d866f88aca8c03c99ec,PodSandboxId:aa78581be8a6304f1c8e0bf3c95b149bc3f2dba1e1125808f68e37d4391020cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d109c0821a2b9225b69b99a95000df5cd1de5d606bc187b3620d730d7769c6ad,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:f1b432e0ed98f3e2792cf81d6eaa62dc4bf37dd2efd9000b21d206e891ab1fb4,State:CONTAINER_RUNNING,CreatedAt:1628813929896318496,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-20210813001622-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29b5a3494fd7c53351d2b61e9b662a3a,},Annotations:map[string]string{io.kubernetes.container.hash: e3642d59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:225ac55e8ba48055318bf4a4769068fc5320868ce87efef344a41ff833b2afff,PodSandboxId:e28746aaa68fd1b227bf2c780578b35a1efa0c39623f1c492afac324e0b3bbc9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:cfbff4b3797c497fcb2b550d8ce345151f79f7e650212cd63bf2dc3cd5428274,State:CONTAINER_RUNNING,CreatedAt:1628813929223688025,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-20210813001622-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcd7047348b95304f11961ab6d0a99de,},Annotations:map[string]string{io.kubernetes.container.hash: 1d5dd31d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2d595a01e7c1af4e0cd4816f9bbf733a0f2002cdd046c625ed5a27ca318b2e9,PodSandboxId:c59c5ebdb47f76ad7bfa9d5622a11e7f59f15e2fddf2955984f1378d3f630c9f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1628813893740727778,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a0dcf9e-e408-42b1-b85a-5ad22cab3a5d,},Annotations:map[string]string{io.kubernetes.container.hash: b47cb70c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:008b8de52185ba93c748e98111e4051dbef1a3adb3818923a8ad7a9aeb54ce09,PodSandboxId:db3f74f5f0ce4164b5f36fafad269291cbadd840c411461b50d79e5c00356eb3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19,Annotations:map[string]string{},},ImageRef:7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19,State:CONTAINER_EXITED,CreatedAt:1628813892851612186,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8m4f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28c49088-079a-4a84-a17a-db12715f9314,},Annotations:map[string]string{io.kubernetes.container.hash: 158b70b2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:062f5ddbf3e0e03651ca624abe77646e4edfecbd7795562e39cc8ae45cda3f04,PodSandboxId:13548c1ef1814b42b406bed6c740596b923279393507df00a9ad455293f408dd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61,Annotations:map[string]string{},},ImageRef:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61,State:CONTAINER_EXITED,CreatedAt:1628813892154884252,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6955765f44-qd5cs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce64704b-5280-4603-9103-0fc8f906d6eb,},Annotations:map[string]string{io.kubernetes.container.hash: 69c2cb84,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4a8133fd9ebe23ad9db2caf1d929ccb898d91af70632e15166a95a155bfaf8d,PodSandboxId:e28746aaa68fd1b227bf2c780578b35a1efa0c39623f1c492afac324e0b3bbc9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,State:CONTAINER_EXITED,CreatedAt:1628813868878634968,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-20210813001622-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcd7047348b95304f11961ab6d0a99de,},Annotations:map[string]string{io.kubernetes.container.hash: 1d5dd31d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:803caa744a8e8cabf6837af54c4c11725a378fbbb06d219681a16f0281fcfcb3,PodSandboxId:385db44b38c8b7ee4afbd6f499b52d0fa97cdc5e3ebc5fbe7fc1f3ead5c3c1d9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056,Annotations:map[string]string{},},ImageRef:5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056,State:CONTAINER_EXITED,CreatedAt:1628813867535915151,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-20210813001622-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 603b914543a305bf066dc8de01ce2232,},Annotations:map[string]string{io.kubernetes.container.hash: 589bcd22,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9a8455b064798364aded95d5385e3bcfb0aaab6c987fe33408efe4b9674ddfd,PodSandboxId:48fd9f641438f1b90b068df7be0e85b20aa2fd34a5b07eb19eb5234b609893b6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2,Annotations:map[string]string{},},ImageRef:0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2,State:CONTAINER_EXITED,CreatedAt:1628813867455597250,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-20210813001622-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bfcdeec3c584f675942a6a7a9b0f15d,},Annotations:map[string]string{io.kubernetes.container.hash: a3c4bc57,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:820468f3c8190601605731e6567ffb6f24378dfd0a502efee0ddcba319b2c3b2,PodSandboxId:122a471892cdfac5b97bf97cf90111bbf7d79dce56963540560b2405194fd701,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28,Annotations:map[string]string{},},ImageRef:78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28,State:CONTAINER_EXITED,CreatedAt:1628813867276039825,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-20210813001622-820289,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb577061a17ad23cfbbf52e9419bf32a,},Annotations:map[string]string{io.kubernetes.container.hash: 99930feb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=909c03b8-603f-4ea5-ac76-9f8708eed8f1 name=/runtime.v1alpha2.RuntimeService/ListContainers
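	
	The three identical request/response cycles above are routine CRI polling: each ListContainersRequest carries an empty ContainerFilter, so CRI-O logs "No filters were applied, returning full container list" and returns every container, running and exited. The following Go sketch issues the same RPC directly; it is illustrative only and not part of the test suite, and it assumes CRI-O's default socket path, the k8s.io/cri-api v1alpha2 client, and a grpc-go version that resolves unix:// targets.
	
	    // list_containers.go: a minimal sketch of the ListContainers RPC seen
	    // in the CRI-O debug log above (assumptions noted in the text).
	    package main
	
	    import (
	    	"context"
	    	"fmt"
	    	"time"
	
	    	"google.golang.org/grpc"
	    	pb "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
	    )
	
	    func main() {
	    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	    	defer cancel()
	
	    	// Assumed default CRI-O endpoint; adjust if the socket lives elsewhere.
	    	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
	    		grpc.WithInsecure(), grpc.WithBlock())
	    	if err != nil {
	    		panic(err)
	    	}
	    	defer conn.Close()
	
	    	// An empty filter takes the "No filters were applied" path in the log.
	    	resp, err := pb.NewRuntimeServiceClient(conn).ListContainers(ctx,
	    		&pb.ListContainersRequest{Filter: &pb.ContainerFilter{}})
	    	if err != nil {
	    		panic(err)
	    	}
	    	for _, c := range resp.Containers {
	    		// Truncated IDs match the CONTAINER column in the status table below.
	    		fmt.Printf("%s  %-25s attempt=%d  %s\n",
	    			c.Id[:13], c.Metadata.Name, c.Metadata.Attempt, c.State)
	    	}
	    }
	
	Run on the node (for example via minikube ssh), it would print one line per container, mirroring the container status table that follows.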
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID
	70407277ccc87       7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19   8 seconds ago        Running             kube-proxy                1                   db3f74f5f0ce4
	e03eecdf80a35       70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61   9 seconds ago        Running             coredns                   1                   13548c1ef1814
	b6da6e0a8e361       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 seconds ago        Running             storage-provisioner       2                   c59c5ebdb47f7
	a4be56999c331       b0f1517c1f4bb153597033d2efd81a9ac630e6a569307f993b2c0368afcf0302   15 seconds ago       Running             kube-controller-manager   0                   efa7202207d08
	0c67f89bfced3       90d27391b7808cde8d9a81cfa43b1e81de5c4912b4b52a7dccb19eb4fe3c236b   16 seconds ago       Running             kube-apiserver            0                   18dd37e579160
	e4dd93d454d7f       d109c0821a2b9225b69b99a95000df5cd1de5d606bc187b3620d730d7769c6ad   16 seconds ago       Running             kube-scheduler            0                   aa78581be8a63
	225ac55e8ba48       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f   16 seconds ago       Running             etcd                      1                   e28746aaa68fd
	f2d595a01e7c1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   52 seconds ago       Exited              storage-provisioner       1                   c59c5ebdb47f7
	008b8de52185b       7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19   53 seconds ago       Exited              kube-proxy                0                   db3f74f5f0ce4
	062f5ddbf3e0e       70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61   54 seconds ago       Exited              coredns                   0                   13548c1ef1814
	e4a8133fd9ebe       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f   About a minute ago   Exited              etcd                      0                   e28746aaa68fd
	803caa744a8e8       5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056   About a minute ago   Exited              kube-controller-manager   0                   385db44b38c8b
	d9a8455b06479       0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2   About a minute ago   Exited              kube-apiserver            0                   48fd9f641438f
	820468f3c8190       78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28   About a minute ago   Exited              kube-scheduler            0                   122a471892cdf
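	
	The CREATED and ATTEMPT columns in this table are rendered from the CreatedAt (Unix nanoseconds) and Metadata.Attempt fields of the ListContainers responses above. A small sketch of that conversion, using the kube-proxy timestamp copied from the log; the rendering details are an assumption, not taken from the report's tooling:
	
	    package main
	
	    import (
	    	"fmt"
	    	"time"
	    )
	
	    func main() {
	    	// CreatedAt of kube-proxy (attempt 1), copied from the response above.
	    	created := time.Unix(0, 1628813937317913867)
	    	fmt.Println(created.UTC()) // 2021-08-13 00:18:57 UTC
	
	    	// Relative to the log timestamp (00:19:06) this truncates to "8s ago",
	    	// matching the kube-proxy row in the table.
	    	logTime := time.Date(2021, 8, 13, 0, 19, 6, 0, time.UTC)
	    	fmt.Println(logTime.Sub(created).Truncate(time.Second), "ago")
	    }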
	
	* 
	* ==> coredns [062f5ddbf3e0e03651ca624abe77646e4edfecbd7795562e39cc8ae45cda3f04] <==
	* E0813 00:18:12.398413       1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
	E0813 00:18:12.398750       1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
	E0813 00:18:13.400351       1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
	E0813 00:18:13.402190       1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
	E0813 00:18:13.402359       1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
	CoreDNS-1.6.5
	linux/amd64, go1.13.4, c2fd1b2
	E0813 00:18:12.398422       1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
	E0813 00:18:12.398413       1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
	E0813 00:18:12.398750       1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
	E0813 00:18:13.400351       1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
	E0813 00:18:13.402190       1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
	E0813 00:18:13.402359       1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
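
	The connection-refused errors above all target 10.96.0.1:443, the ClusterIP of the default kubernetes Service, and date from the window in which the apiserver container was restarting. A quick way to confirm the Service VIP is backed by a live apiserver endpoint again (context name assumed to match this run's profile):

	  kubectl --context test-preload-20210813001622-820289 get endpoints kubernetes -n default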
	
	* 
	* ==> coredns [e03eecdf80a35d3a26282e1513c019c6d523b8ce63829f8adb2caa265e373d4c] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = ce724ab0839054f2e7df24df11d60a5e
	CoreDNS-1.6.5
	linux/amd64, go1.13.4, c2fd1b2
	
	* 
	* ==> describe nodes <==
	* Name:               test-preload-20210813001622-820289
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-20210813001622-820289
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=dc1c3ca26e9449ce488a773126b8450402c94a19
	                    minikube.k8s.io/name=test-preload-20210813001622-820289
	                    minikube.k8s.io/updated_at=2021_08_13T00_17_56_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 00:17:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-20210813001622-820289
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Aug 2021 00:19:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 00:18:55 +0000   Fri, 13 Aug 2021 00:17:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 00:18:55 +0000   Fri, 13 Aug 2021 00:17:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 00:18:55 +0000   Fri, 13 Aug 2021 00:17:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 00:18:55 +0000   Fri, 13 Aug 2021 00:18:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.8
	  Hostname:    test-preload-20210813001622-820289
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186320Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2186320Ki
	  pods:               110
	System Info:
	  Machine ID:                 b8f5a584897d475699b17c669c40be4e
	  System UUID:                b8f5a584-897d-4756-99b1-7c669c40be4e
	  Boot ID:                    1b83d3d5-bb73-4dc9-86ad-1f9c5482cba3
	  Kernel Version:             4.19.182
	  OS Image:                   Buildroot 2020.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.20.2
	  Kubelet Version:            v1.17.3
	  Kube-Proxy Version:         v1.17.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6955765f44-qd5cs                                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (7%)     56s
	  kube-system                 etcd-test-preload-20210813001622-820289                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-apiserver-test-preload-20210813001622-820289              250m (12%)    0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 kube-controller-manager-test-preload-20210813001622-820289    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 kube-proxy-h8m4f                                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-test-preload-20210813001622-820289              100m (5%)     0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 storage-provisioner                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (3%)   170Mi (7%)
	  ephemeral-storage  0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From                                            Message
	  ----    ------                   ----               ----                                            -------
	  Normal  NodeHasSufficientMemory  81s (x5 over 81s)  kubelet, test-preload-20210813001622-820289     Node test-preload-20210813001622-820289 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    81s (x5 over 81s)  kubelet, test-preload-20210813001622-820289     Node test-preload-20210813001622-820289 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     81s (x4 over 81s)  kubelet, test-preload-20210813001622-820289     Node test-preload-20210813001622-820289 status is now: NodeHasSufficientPID
	  Normal  Starting                 70s                kubelet, test-preload-20210813001622-820289     Starting kubelet.
	  Normal  NodeHasSufficientMemory  70s                kubelet, test-preload-20210813001622-820289     Node test-preload-20210813001622-820289 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    70s                kubelet, test-preload-20210813001622-820289     Node test-preload-20210813001622-820289 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     70s                kubelet, test-preload-20210813001622-820289     Node test-preload-20210813001622-820289 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  70s                kubelet, test-preload-20210813001622-820289     Updated Node Allocatable limit across pods
	  Normal  NodeReady                60s                kubelet, test-preload-20210813001622-820289     Node test-preload-20210813001622-820289 status is now: NodeReady
	  Normal  Starting                 53s                kube-proxy, test-preload-20210813001622-820289  Starting kube-proxy.
	  Normal  Starting                 18s                kubelet, test-preload-20210813001622-820289     Starting kubelet.
	  Normal  NodeHasSufficientMemory  18s (x8 over 18s)  kubelet, test-preload-20210813001622-820289     Node test-preload-20210813001622-820289 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18s (x7 over 18s)  kubelet, test-preload-20210813001622-820289     Node test-preload-20210813001622-820289 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18s (x8 over 18s)  kubelet, test-preload-20210813001622-820289     Node test-preload-20210813001622-820289 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18s                kubelet, test-preload-20210813001622-820289     Updated Node Allocatable limit across pods
	  Normal  Starting                 9s                 kube-proxy, test-preload-20210813001622-820289  Starting kube-proxy.
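
	The request/limit percentages in the Allocated resources table above are taken against the node's allocatable capacity: 650m CPU requested of 2 CPUs (2000m) is 650/2000 = 32.5%, printed as 32%, and 70Mi of memory against 2186320Ki (~2135Mi) comes out at roughly 3%. The same table can be re-queried directly (profile/context name assumed from this run):

	  kubectl --context test-preload-20210813001622-820289 describe node test-preload-20210813001622-820289 | grep -A 6 "Allocated resources"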
	
	* 
	* ==> dmesg <==
	* [  +0.122100] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.698495] Unstable clock detected, switching default tracing clock to "global"
	              If you want to keep using the local clock, then add:
	                "trace_clock=local"
	              on the kernel command line
	[  +0.000024] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +4.085062] systemd-fstab-generator[1161]: Ignoring "noauto" for root device
	[  +0.065874] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +1.059136] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1730 comm=systemd-network
	[  +1.352797] vboxguest: loading out-of-tree module taints kernel.
	[  +0.005658] vboxguest: PCI device not found, probably running on physical hardware.
	[  +1.858491] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[  +5.312887] systemd-fstab-generator[2137]: Ignoring "noauto" for root device
	[  +0.128577] systemd-fstab-generator[2150]: Ignoring "noauto" for root device
	[  +0.188972] systemd-fstab-generator[2176]: Ignoring "noauto" for root device
	[Aug13 00:17] systemd-fstab-generator[3323]: Ignoring "noauto" for root device
	[ +13.757142] systemd-fstab-generator[3750]: Ignoring "noauto" for root device
	[Aug13 00:18] kauditd_printk_skb: 29 callbacks suppressed
	[ +11.783363] systemd-fstab-generator[4871]: Ignoring "noauto" for root device
	[  +0.215180] systemd-fstab-generator[4884]: Ignoring "noauto" for root device
	[  +0.238138] systemd-fstab-generator[4905]: Ignoring "noauto" for root device
	[ +17.427592] NFSD: Unable to end grace period: -110
	[  +7.211997] systemd-fstab-generator[5984]: Ignoring "noauto" for root device
	[  +9.768930] kauditd_printk_skb: 59 callbacks suppressed
	
	* 
	* ==> etcd [225ac55e8ba48055318bf4a4769068fc5320868ce87efef344a41ff833b2afff] <==
	* 2021-08-13 00:18:49.982595 I | embed: initial advertise peer URLs = https://192.168.39.8:2380
	2021-08-13 00:18:49.982663 I | embed: initial cluster = 
	2021-08-13 00:18:50.008802 I | etcdserver: restarting member 5d432f19cde6e0bf in cluster ebeeb2da37a85eb1 at commit index 415
	raft2021/08/13 00:18:50 INFO: 5d432f19cde6e0bf switched to configuration voters=()
	raft2021/08/13 00:18:50 INFO: 5d432f19cde6e0bf became follower at term 2
	raft2021/08/13 00:18:50 INFO: newRaft 5d432f19cde6e0bf [peers: [], term: 2, commit: 415, applied: 0, lastindex: 415, lastterm: 2]
	2021-08-13 00:18:50.055568 W | auth: simple token is not cryptographically signed
	2021-08-13 00:18:50.059938 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2021-08-13 00:18:50.062121 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-13 00:18:50.062243 I | embed: listening for metrics on http://127.0.0.1:2381
	2021-08-13 00:18:50.062413 I | embed: listening for peers on 192.168.39.8:2380
	raft2021/08/13 00:18:50 INFO: 5d432f19cde6e0bf switched to configuration voters=(6720266856842059967)
	2021-08-13 00:18:50.067834 I | etcdserver/membership: added member 5d432f19cde6e0bf [https://192.168.39.8:2380] to cluster ebeeb2da37a85eb1
	2021-08-13 00:18:50.067932 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-13 00:18:50.067961 I | etcdserver/api: enabled capabilities for version 3.4
	raft2021/08/13 00:18:51 INFO: 5d432f19cde6e0bf is starting a new election at term 2
	raft2021/08/13 00:18:51 INFO: 5d432f19cde6e0bf became candidate at term 3
	raft2021/08/13 00:18:51 INFO: 5d432f19cde6e0bf received MsgVoteResp from 5d432f19cde6e0bf at term 3
	raft2021/08/13 00:18:51 INFO: 5d432f19cde6e0bf became leader at term 3
	raft2021/08/13 00:18:51 INFO: raft.node: 5d432f19cde6e0bf elected leader 5d432f19cde6e0bf at term 3
	2021-08-13 00:18:51.526983 I | etcdserver: published {Name:test-preload-20210813001622-820289 ClientURLs:[https://192.168.39.8:2379]} to cluster ebeeb2da37a85eb1
	2021-08-13 00:18:51.527426 I | embed: ready to serve client requests
	2021-08-13 00:18:51.528821 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-13 00:18:51.529032 I | embed: ready to serve client requests
	2021-08-13 00:18:51.530014 I | embed: serving client requests on 192.168.39.8:2379
	
	* 
	* ==> etcd [e4a8133fd9ebe23ad9db2caf1d929ccb898d91af70632e15166a95a155bfaf8d] <==
	* raft2021/08/13 00:17:48 INFO: 5d432f19cde6e0bf became follower at term 1
	raft2021/08/13 00:17:48 INFO: 5d432f19cde6e0bf switched to configuration voters=(6720266856842059967)
	2021-08-13 00:17:48.973832 W | auth: simple token is not cryptographically signed
	2021-08-13 00:17:48.978412 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2021-08-13 00:17:48.981930 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-13 00:17:48.982082 I | embed: listening for metrics on http://127.0.0.1:2381
	2021-08-13 00:17:48.982178 I | embed: listening for peers on 192.168.39.8:2380
	2021-08-13 00:17:48.983137 I | etcdserver: 5d432f19cde6e0bf as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2021/08/13 00:17:48 INFO: 5d432f19cde6e0bf switched to configuration voters=(6720266856842059967)
	2021-08-13 00:17:48.983305 I | etcdserver/membership: added member 5d432f19cde6e0bf [https://192.168.39.8:2380] to cluster ebeeb2da37a85eb1
	raft2021/08/13 00:17:49 INFO: 5d432f19cde6e0bf is starting a new election at term 1
	raft2021/08/13 00:17:49 INFO: 5d432f19cde6e0bf became candidate at term 2
	raft2021/08/13 00:17:49 INFO: 5d432f19cde6e0bf received MsgVoteResp from 5d432f19cde6e0bf at term 2
	raft2021/08/13 00:17:49 INFO: 5d432f19cde6e0bf became leader at term 2
	raft2021/08/13 00:17:49 INFO: raft.node: 5d432f19cde6e0bf elected leader 5d432f19cde6e0bf at term 2
	2021-08-13 00:17:49.566986 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-13 00:17:49.568442 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-13 00:17:49.568637 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-13 00:17:49.568665 I | etcdserver: published {Name:test-preload-20210813001622-820289 ClientURLs:[https://192.168.39.8:2379]} to cluster ebeeb2da37a85eb1
	2021-08-13 00:17:49.568909 I | embed: ready to serve client requests
	2021-08-13 00:17:49.569982 I | embed: ready to serve client requests
	2021-08-13 00:17:49.570466 I | embed: serving client requests on 192.168.39.8:2379
	2021-08-13 00:17:49.571305 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-13 00:18:30.117388 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:764" took too long (153.342086ms) to execute
	2021-08-13 00:18:44.924026 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:172" took too long (599.648899ms) to execute
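
	The two etcd blocks are the same member (5d432f19cde6e0bf): [e4a8…] is the first boot, which bootstraps the cluster and wins the election at term 2, while [225ac…] is the post-restart instance, which replays the log to commit index 415 and re-elects itself at term 3. The two "took too long" warnings (153ms and 599ms) are reads that exceeded etcd's slow-request warning threshold shortly before the restart. A minimal health check from inside the node, reusing the cert paths from the ClientTLS line above:

	  ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
	    --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	    --cert=/var/lib/minikube/certs/etcd/server.crt \
	    --key=/var/lib/minikube/certs/etcd/server.key \
	    endpoint status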
	
	* 
	* ==> kernel <==
	*  00:19:06 up 2 min,  0 users,  load average: 1.98, 0.94, 0.36
	Linux test-preload-20210813001622-820289 4.19.182 #1 SMP Fri Aug 6 09:11:32 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [0c67f89bfced3d3a7cd506afb33eec820b09d77bc9600ae3fea8a068ef26449c] <==
	* I0813 00:18:54.539741       1 controller.go:81] Starting OpenAPI AggregationController
	I0813 00:18:54.540008       1 controller.go:85] Starting OpenAPI controller
	I0813 00:18:54.540115       1 customresource_discovery_controller.go:208] Starting DiscoveryController
	I0813 00:18:54.540142       1 naming_controller.go:288] Starting NamingConditionController
	I0813 00:18:54.540240       1 establishing_controller.go:73] Starting EstablishingController
	I0813 00:18:54.540377       1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
	I0813 00:18:54.540469       1 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0813 00:18:54.599970       1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I0813 00:18:54.600081       1 dynamic_cafile_content.go:166] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	I0813 00:18:54.640392       1 shared_informer.go:204] Caches are synced for crd-autoregister 
	I0813 00:18:54.643954       1 shared_informer.go:204] Caches are synced for cluster_authentication_trust_controller 
	I0813 00:18:54.656308       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	E0813 00:18:54.664929       1 controller.go:151] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0813 00:18:54.728236       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0813 00:18:54.733795       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0813 00:18:54.734292       1 cache.go:39] Caches are synced for autoregister controller
	I0813 00:18:55.527348       1 controller.go:107] OpenAPI AggregationController: Processing item 
	I0813 00:18:55.527421       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0813 00:18:55.527455       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0813 00:18:55.548091       1 storage_scheduling.go:142] all system priority classes are created successfully or already exist.
	I0813 00:18:56.296681       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0813 00:18:56.324374       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0813 00:18:56.394886       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0813 00:18:56.413888       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0813 00:18:56.426071       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-apiserver [d9a8455b064798364aded95d5385e3bcfb0aaab6c987fe33408efe4b9674ddfd] <==
	* W0813 00:18:46.140476       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:49314->127.0.0.1:2379: read: connection reset by peer". Reconnecting...
	W0813 00:18:46.140814       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:49316->127.0.0.1:2379: read: connection reset by peer". Reconnecting...
	W0813 00:18:46.140960       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:49318->127.0.0.1:2379: read: connection reset by peer". Reconnecting...
	W0813 00:18:46.141110       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:49320->127.0.0.1:2379: read: connection reset by peer". Reconnecting...
	W0813 00:18:46.141164       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:49324->127.0.0.1:2379: read: connection reset by peer". Reconnecting...
	W0813 00:18:46.141324       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:49322->127.0.0.1:2379: read: connection reset by peer". Reconnecting...
	W0813 00:18:46.141381       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:49326->127.0.0.1:2379: read: connection reset by peer". Reconnecting...
	W0813 00:18:46.141616       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:49328->127.0.0.1:2379: read: connection reset by peer". Reconnecting...
	W0813 00:18:46.141772       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:49330->127.0.0.1:2379: read: connection reset by peer". Reconnecting...
	W0813 00:18:46.141925       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:49332->127.0.0.1:2379: read: connection reset by peer". Reconnecting...
	W0813 00:18:46.142067       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:49334->127.0.0.1:2379: read: connection reset by peer". Reconnecting...
	W0813 00:18:46.142121       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:49338->127.0.0.1:2379: read: connection reset by peer". Reconnecting...
	W0813 00:18:46.142182       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:49336->127.0.0.1:2379: read: connection reset by peer". Reconnecting...
	W0813 00:18:46.142234       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:49340->127.0.0.1:2379: read: connection reset by peer". Reconnecting...
	W0813 00:18:46.142392       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:49342->127.0.0.1:2379: read: connection reset by peer". Reconnecting...
	W0813 00:18:46.142613       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:49344->127.0.0.1:2379: read: connection reset by peer". Reconnecting...
	W0813 00:18:46.142763       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:49346->127.0.0.1:2379: read: connection reset by peer". Reconnecting...
	W0813 00:18:46.142906       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:49350->127.0.0.1:2379: read: connection reset by peer". Reconnecting...
	W0813 00:18:46.143049       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:49348->127.0.0.1:2379: read: connection reset by peer". Reconnecting...
	W0813 00:18:46.143196       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:49352->127.0.0.1:2379: read: connection reset by peer". Reconnecting...
	W0813 00:18:46.143345       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:49354->127.0.0.1:2379: read: connection reset by peer". Reconnecting...
	W0813 00:18:46.143400       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:49356->127.0.0.1:2379: read: connection reset by peer". Reconnecting...
	W0813 00:18:46.143637       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:49358->127.0.0.1:2379: read: connection reset by peer". Reconnecting...
	W0813 00:18:46.143862       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:49360->127.0.0.1:2379: read: connection reset by peer". Reconnecting...
	W0813 00:18:46.144013       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:49362->127.0.0.1:2379: read: connection reset by peer". Reconnecting...
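
	This reconnect storm is the first-boot apiserver ([d9a8…]) losing all of its etcd client connections at 00:18:46, which lines up with the old etcd container exiting during the node restart; the container table at the top of this log shows both generations side by side. The surviving containers can be listed the same way the test harness does:

	  out/minikube-linux-amd64 -p test-preload-20210813001622-820289 ssh "sudo crictl ps -a"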
	
	* 
	* ==> kube-controller-manager [803caa744a8e8cabf6837af54c4c11725a378fbbb06d219681a16f0281fcfcb3] <==
	* I0813 00:18:10.523814       1 shared_informer.go:197] Waiting for caches to sync for cidrallocator
	I0813 00:18:10.523837       1 shared_informer.go:204] Caches are synced for cidrallocator 
	I0813 00:18:10.527271       1 shared_informer.go:204] Caches are synced for attach detach 
	I0813 00:18:10.531674       1 range_allocator.go:373] Set node test-preload-20210813001622-820289 PodCIDR to [10.244.0.0/24]
	I0813 00:18:10.537466       1 shared_informer.go:204] Caches are synced for taint 
	I0813 00:18:10.537923       1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone: 
	W0813 00:18:10.538028       1 node_lifecycle_controller.go:1058] Missing timestamp for Node test-preload-20210813001622-820289. Assuming now as a timestamp.
	I0813 00:18:10.538067       1 node_lifecycle_controller.go:1259] Controller detected that zone  is now in state Normal.
	I0813 00:18:10.538926       1 taint_manager.go:186] Starting NoExecuteTaintManager
	I0813 00:18:10.539194       1 event.go:281] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"test-preload-20210813001622-820289", UID:"2d10a998-897d-4045-ae28-3e6a053ae573", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node test-preload-20210813001622-820289 event: Registered Node test-preload-20210813001622-820289 in Controller
	I0813 00:18:10.568600       1 shared_informer.go:204] Caches are synced for TTL 
	I0813 00:18:10.571453       1 shared_informer.go:204] Caches are synced for daemon sets 
	I0813 00:18:10.588059       1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"ce4d86e9-7c14-42f5-bdbe-db0d08201938", APIVersion:"apps/v1", ResourceVersion:"208", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-h8m4f
	I0813 00:18:10.609928       1 shared_informer.go:204] Caches are synced for deployment 
	I0813 00:18:10.615436       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"102dea55-b301-45bb-8477-08006fcc1cca", APIVersion:"apps/v1", ResourceVersion:"311", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-6955765f44 to 1
	I0813 00:18:10.619165       1 shared_informer.go:204] Caches are synced for disruption 
	I0813 00:18:10.619366       1 disruption.go:338] Sending events to api server.
	I0813 00:18:10.646752       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-6955765f44", UID:"ec2c92a0-5b51-4c86-b071-87e11a7c45f9", APIVersion:"apps/v1", ResourceVersion:"322", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-6955765f44-qd5cs
	E0813 00:18:10.674844       1 daemon_controller.go:290] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"ce4d86e9-7c14-42f5-bdbe-db0d08201938", ResourceVersion:"208", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764410676, loc:(*time.Location)(0x6b951c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc000e68c20), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Names
pace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeS
ource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc000b28200), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000e68c40), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolu
meSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIV
olumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000e68c60), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.A
zureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.17.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc000e68ca0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMo
de)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc000de91d0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000378a08), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"beta.kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServic
eAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0012b1980), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy
{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00000e160)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000378ac8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0813 00:18:10.687723       1 shared_informer.go:204] Caches are synced for garbage collector 
	I0813 00:18:10.690621       1 garbagecollector.go:138] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0813 00:18:10.725077       1 shared_informer.go:204] Caches are synced for resource quota 
	I0813 00:18:10.727151       1 shared_informer.go:204] Caches are synced for resource quota 
	I0813 00:18:10.749766       1 shared_informer.go:204] Caches are synced for garbage collector 
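
	The daemon_controller error above ("Operation cannot be fulfilled ... the object has been modified") is the apiserver's optimistic concurrency at work: the controller tried to write kube-proxy's DaemonSet status against resourceVersion 208 after another writer had already bumped it, so the update was rejected and left to a retry. The current version can be read back with (context name assumed from this run):

	  kubectl --context test-preload-20210813001622-820289 -n kube-system get daemonset kube-proxy -o jsonpath='{.metadata.resourceVersion}'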
	
	* 
	* ==> kube-controller-manager [a4be56999c331937233d0d3f01adccd72eaddc405b50614a557de59dc46434ad] <==
	* I0813 00:18:57.076110       1 replica_set.go:180] Starting replicationcontroller controller
	I0813 00:18:57.076438       1 shared_informer.go:197] Waiting for caches to sync for ReplicationController
	I0813 00:18:57.616331       1 controllermanager.go:533] Started "garbagecollector"
	I0813 00:18:57.617846       1 garbagecollector.go:129] Starting garbage collector controller
	I0813 00:18:57.617860       1 shared_informer.go:197] Waiting for caches to sync for garbage collector
	I0813 00:18:57.617985       1 graph_builder.go:282] GraphBuilder running
	I0813 00:18:57.636823       1 controllermanager.go:533] Started "job"
	I0813 00:18:57.636901       1 job_controller.go:143] Starting job controller
	I0813 00:18:57.637080       1 shared_informer.go:197] Waiting for caches to sync for job
	I0813 00:18:57.651873       1 controllermanager.go:533] Started "persistentvolume-expander"
	W0813 00:18:57.651974       1 controllermanager.go:525] Skipping "root-ca-cert-publisher"
	W0813 00:18:57.651983       1 core.go:246] configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes.
	W0813 00:18:57.651989       1 controllermanager.go:525] Skipping "route"
	I0813 00:18:57.651933       1 expand_controller.go:319] Starting expand controller
	I0813 00:18:57.652065       1 shared_informer.go:197] Waiting for caches to sync for expand
	I0813 00:18:57.657850       1 controllermanager.go:533] Started "podgc"
	I0813 00:18:57.657928       1 gc_controller.go:88] Starting GC controller
	I0813 00:18:57.658164       1 shared_informer.go:197] Waiting for caches to sync for GC
	I0813 00:18:57.667001       1 controllermanager.go:533] Started "daemonset"
	I0813 00:18:57.667068       1 daemon_controller.go:255] Starting daemon sets controller
	I0813 00:18:57.667589       1 shared_informer.go:197] Waiting for caches to sync for daemon sets
	I0813 00:18:57.679097       1 controllermanager.go:533] Started "csrapproving"
	I0813 00:18:57.679274       1 certificate_controller.go:118] Starting certificate controller "csrapproving"
	I0813 00:18:57.680106       1 shared_informer.go:197] Waiting for caches to sync for certificate-csrapproving
	I0813 00:18:57.686957       1 node_ipam_controller.go:94] Sending events to api server.
	
	* 
	* ==> kube-proxy [008b8de52185ba93c748e98111e4051dbef1a3adb3818923a8ad7a9aeb54ce09] <==
	* W0813 00:18:13.192996       1 server_others.go:323] Unknown proxy mode "", assuming iptables proxy
	I0813 00:18:13.210426       1 node.go:135] Successfully retrieved node IP: 192.168.39.8
	I0813 00:18:13.210643       1 server_others.go:145] Using iptables Proxier.
	I0813 00:18:13.211862       1 server.go:571] Version: v1.17.0
	I0813 00:18:13.221149       1 config.go:313] Starting service config controller
	I0813 00:18:13.221406       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0813 00:18:13.221679       1 config.go:131] Starting endpoints config controller
	I0813 00:18:13.221900       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0813 00:18:13.322761       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0813 00:18:13.322979       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-proxy [70407277ccc874dbdd900052c593084620058700f10ae3827e3caf148651d7c8] <==
	* W0813 00:18:57.539967       1 server_others.go:323] Unknown proxy mode "", assuming iptables proxy
	I0813 00:18:57.552088       1 node.go:135] Successfully retrieved node IP: 192.168.39.8
	I0813 00:18:57.552189       1 server_others.go:145] Using iptables Proxier.
	I0813 00:18:57.553127       1 server.go:571] Version: v1.17.0
	I0813 00:18:57.558865       1 config.go:313] Starting service config controller
	I0813 00:18:57.559046       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0813 00:18:57.559284       1 config.go:131] Starting endpoints config controller
	I0813 00:18:57.559376       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0813 00:18:57.660723       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0813 00:18:57.661181       1 shared_informer.go:204] Caches are synced for service config 
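
	Both kube-proxy generations log "Unknown proxy mode, assuming iptables proxy": the mode field in the kube-proxy ConfigMap is empty, so the proxier falls back to iptables. That field can be inspected with (context name assumed from this run):

	  kubectl --context test-preload-20210813001622-820289 -n kube-system get configmap kube-proxy -o jsonpath='{.data.config\.conf}' | grep "mode:"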
	
	* 
	* ==> kube-scheduler [820468f3c8190601605731e6567ffb6f24378dfd0a502efee0ddcba319b2c3b2] <==
	* E0813 00:17:53.810921       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 00:17:53.812892       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 00:17:53.815112       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 00:17:53.819165       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 00:17:53.820289       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 00:17:53.825170       1 reflector.go:156] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 00:17:53.826875       1 reflector.go:156] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 00:17:53.829087       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 00:17:53.830373       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 00:17:53.831360       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 00:17:53.832885       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 00:17:53.833803       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0813 00:17:54.892873       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0813 00:18:46.495212       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?allowWatchBookmarks=true&resourceVersion=203&timeout=5m39s&timeoutSeconds=339&watch=true: dial tcp 192.168.39.8:8443: connect: connection refused
	E0813 00:18:46.495864       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StatefulSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=1&timeout=8m42s&timeoutSeconds=522&watch=true: dial tcp 192.168.39.8:8443: connect: connection refused
	E0813 00:18:46.496057       1 reflector.go:320] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to watch *v1.ConfigMap: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=155&timeout=9m55s&timeoutSeconds=595&watch=true: dial tcp 192.168.39.8:8443: connect: connection refused
	E0813 00:18:46.496315       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=384&timeout=5m48s&timeoutSeconds=348&watch=true: dial tcp 192.168.39.8:8443: connect: connection refused
	E0813 00:18:46.496802       1 reflector.go:320] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to watch *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%3DFailed%2Cstatus.phase%3DSucceeded&resourceVersion=371&timeoutSeconds=338&watch=true: dial tcp 192.168.39.8:8443: connect: connection refused
	E0813 00:18:46.497897       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=1&timeout=7m45s&timeoutSeconds=465&watch=true: dial tcp 192.168.39.8:8443: connect: connection refused
	E0813 00:18:46.498234       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicationController: Get https://control-plane.minikube.internal:8443/api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=1&timeout=6m28s&timeoutSeconds=388&watch=true: dial tcp 192.168.39.8:8443: connect: connection refused
	E0813 00:18:46.498664       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicaSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=372&timeout=9m38s&timeoutSeconds=578&watch=true: dial tcp 192.168.39.8:8443: connect: connection refused
	E0813 00:18:46.498842       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=1&timeout=6m22s&timeoutSeconds=382&watch=true: dial tcp 192.168.39.8:8443: connect: connection refused
	E0813 00:18:46.499385       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StorageClass: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=341&timeout=6m19s&timeoutSeconds=379&watch=true: dial tcp 192.168.39.8:8443: connect: connection refused
	E0813 00:18:46.499759       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSINode: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csinodes?allowWatchBookmarks=true&resourceVersion=42&timeout=6m48s&timeoutSeconds=408&watch=true: dial tcp 192.168.39.8:8443: connect: connection refused
	E0813 00:18:46.500445       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodDisruptionBudget: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=1&timeout=7m32s&timeoutSeconds=452&watch=true: dial tcp 192.168.39.8:8443: connect: connection refused
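
	The first-boot scheduler shows two distinct failure windows: the 00:17:53 "forbidden" errors are the usual startup race where the scheduler comes up before its RBAC bindings exist (they clear once the caches sync at 00:17:54), and the 00:18:46 connection-refused watches are the apiserver going down for the node restart. Whether the RBAC side is healthy afterwards can be spot-checked via impersonation (context name assumed from this run):

	  kubectl --context test-preload-20210813001622-820289 auth can-i list pods --all-namespaces --as system:kube-scheduler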
	
	* 
	* ==> kube-scheduler [e4dd93d454d7f602b97428ed87ed225090f2e549186f0d866f88aca8c03c99ec] <==
	* I0813 00:18:51.020882       1 serving.go:312] Generated self-signed cert in-memory
	W0813 00:18:51.279604       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found
	W0813 00:18:51.279747       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found
	W0813 00:18:54.629034       1 authentication.go:348] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0813 00:18:54.629132       1 authentication.go:296] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0813 00:18:54.629141       1 authentication.go:297] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0813 00:18:54.629148       1 authentication.go:298] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	W0813 00:18:54.662903       1 authorization.go:47] Authorization is disabled
	W0813 00:18:54.664078       1 authentication.go:92] Authentication is disabled
	I0813 00:18:54.665056       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0813 00:18:54.686148       1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 00:18:54.686386       1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 00:18:54.689473       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0813 00:18:54.690134       1 tlsconfig.go:219] Starting DynamicServingCertificateController
	I0813 00:18:54.786869       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 00:16:36 UTC, end at Fri 2021-08-13 00:19:06 UTC. --
	Aug 13 00:18:54 test-preload-20210813001622-820289 kubelet[5992]: W0813 00:18:54.679544    5992 kubelet.go:1649] Deleted mirror pod "kube-controller-manager-test-preload-20210813001622-820289_kube-system(73c5a5fc-7344-4541-a40b-b9eecdc269f4)" because it is outdated
	Aug 13 00:18:54 test-preload-20210813001622-820289 kubelet[5992]: I0813 00:18:54.750730    5992 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-s28fj" (UniqueName: "kubernetes.io/secret/ce64704b-5280-4603-9103-0fc8f906d6eb-coredns-token-s28fj") pod "coredns-6955765f44-qd5cs" (UID: "ce64704b-5280-4603-9103-0fc8f906d6eb")
	Aug 13 00:18:54 test-preload-20210813001622-820289 kubelet[5992]: I0813 00:18:54.751080    5992 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/28c49088-079a-4a84-a17a-db12715f9314-lib-modules") pod "kube-proxy-h8m4f" (UID: "28c49088-079a-4a84-a17a-db12715f9314")
	Aug 13 00:18:54 test-preload-20210813001622-820289 kubelet[5992]: I0813 00:18:54.751312    5992 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-s7dsb" (UniqueName: "kubernetes.io/secret/7a0dcf9e-e408-42b1-b85a-5ad22cab3a5d-storage-provisioner-token-s7dsb") pod "storage-provisioner" (UID: "7a0dcf9e-e408-42b1-b85a-5ad22cab3a5d")
	Aug 13 00:18:54 test-preload-20210813001622-820289 kubelet[5992]: I0813 00:18:54.751615    5992 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/28c49088-079a-4a84-a17a-db12715f9314-kube-proxy") pod "kube-proxy-h8m4f" (UID: "28c49088-079a-4a84-a17a-db12715f9314")
	Aug 13 00:18:54 test-preload-20210813001622-820289 kubelet[5992]: I0813 00:18:54.751843    5992 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ce64704b-5280-4603-9103-0fc8f906d6eb-config-volume") pod "coredns-6955765f44-qd5cs" (UID: "ce64704b-5280-4603-9103-0fc8f906d6eb")
	Aug 13 00:18:54 test-preload-20210813001622-820289 kubelet[5992]: I0813 00:18:54.752059    5992 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/28c49088-079a-4a84-a17a-db12715f9314-xtables-lock") pod "kube-proxy-h8m4f" (UID: "28c49088-079a-4a84-a17a-db12715f9314")
	Aug 13 00:18:54 test-preload-20210813001622-820289 kubelet[5992]: I0813 00:18:54.752279    5992 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-cwn5h" (UniqueName: "kubernetes.io/secret/28c49088-079a-4a84-a17a-db12715f9314-kube-proxy-token-cwn5h") pod "kube-proxy-h8m4f" (UID: "28c49088-079a-4a84-a17a-db12715f9314")
	Aug 13 00:18:54 test-preload-20210813001622-820289 kubelet[5992]: I0813 00:18:54.752548    5992 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/7a0dcf9e-e408-42b1-b85a-5ad22cab3a5d-tmp") pod "storage-provisioner" (UID: "7a0dcf9e-e408-42b1-b85a-5ad22cab3a5d")
	Aug 13 00:18:54 test-preload-20210813001622-820289 kubelet[5992]: I0813 00:18:54.752714    5992 reconciler.go:156] Reconciler: start to sync state
	Aug 13 00:18:55 test-preload-20210813001622-820289 kubelet[5992]: I0813 00:18:55.406681    5992 kubelet_node_status.go:112] Node test-preload-20210813001622-820289 was previously registered
	Aug 13 00:18:55 test-preload-20210813001622-820289 kubelet[5992]: I0813 00:18:55.407647    5992 kubelet_node_status.go:73] Successfully registered node test-preload-20210813001622-820289
	Aug 13 00:18:55 test-preload-20210813001622-820289 kubelet[5992]: E0813 00:18:55.858065    5992 configmap.go:200] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Aug 13 00:18:55 test-preload-20210813001622-820289 kubelet[5992]: E0813 00:18:55.858263    5992 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/configmap/28c49088-079a-4a84-a17a-db12715f9314-kube-proxy\" (\"28c49088-079a-4a84-a17a-db12715f9314\")" failed. No retries permitted until 2021-08-13 00:18:56.358238611 +0000 UTC m=+8.239110533 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/28c49088-079a-4a84-a17a-db12715f9314-kube-proxy\") pod \"kube-proxy-h8m4f\" (UID: \"28c49088-079a-4a84-a17a-db12715f9314\") : failed to sync configmap cache: timed out waiting for the condition"
	Aug 13 00:18:55 test-preload-20210813001622-820289 kubelet[5992]: E0813 00:18:55.858291    5992 secret.go:195] Couldn't get secret kube-system/storage-provisioner-token-s7dsb: failed to sync secret cache: timed out waiting for the condition
	Aug 13 00:18:55 test-preload-20210813001622-820289 kubelet[5992]: E0813 00:18:55.858330    5992 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/secret/7a0dcf9e-e408-42b1-b85a-5ad22cab3a5d-storage-provisioner-token-s7dsb\" (\"7a0dcf9e-e408-42b1-b85a-5ad22cab3a5d\")" failed. No retries permitted until 2021-08-13 00:18:56.358314847 +0000 UTC m=+8.239186781 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"storage-provisioner-token-s7dsb\" (UniqueName: \"kubernetes.io/secret/7a0dcf9e-e408-42b1-b85a-5ad22cab3a5d-storage-provisioner-token-s7dsb\") pod \"storage-provisioner\" (UID: \"7a0dcf9e-e408-42b1-b85a-5ad22cab3a5d\") : failed to sync secret cache: timed out waiting for the condition"
	Aug 13 00:18:55 test-preload-20210813001622-820289 kubelet[5992]: E0813 00:18:55.858353    5992 secret.go:195] Couldn't get secret kube-system/coredns-token-s28fj: failed to sync secret cache: timed out waiting for the condition
	Aug 13 00:18:55 test-preload-20210813001622-820289 kubelet[5992]: E0813 00:18:55.858401    5992 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/secret/ce64704b-5280-4603-9103-0fc8f906d6eb-coredns-token-s28fj\" (\"ce64704b-5280-4603-9103-0fc8f906d6eb\")" failed. No retries permitted until 2021-08-13 00:18:56.358381588 +0000 UTC m=+8.239253605 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"coredns-token-s28fj\" (UniqueName: \"kubernetes.io/secret/ce64704b-5280-4603-9103-0fc8f906d6eb-coredns-token-s28fj\") pod \"coredns-6955765f44-qd5cs\" (UID: \"ce64704b-5280-4603-9103-0fc8f906d6eb\") : failed to sync secret cache: timed out waiting for the condition"
	Aug 13 00:18:55 test-preload-20210813001622-820289 kubelet[5992]: E0813 00:18:55.858415    5992 configmap.go:200] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Aug 13 00:18:55 test-preload-20210813001622-820289 kubelet[5992]: E0813 00:18:55.858445    5992 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/configmap/ce64704b-5280-4603-9103-0fc8f906d6eb-config-volume\" (\"ce64704b-5280-4603-9103-0fc8f906d6eb\")" failed. No retries permitted until 2021-08-13 00:18:56.358432836 +0000 UTC m=+8.239304670 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce64704b-5280-4603-9103-0fc8f906d6eb-config-volume\") pod \"coredns-6955765f44-qd5cs\" (UID: \"ce64704b-5280-4603-9103-0fc8f906d6eb\") : failed to sync configmap cache: timed out waiting for the condition"
	Aug 13 00:18:55 test-preload-20210813001622-820289 kubelet[5992]: E0813 00:18:55.858705    5992 secret.go:195] Couldn't get secret kube-system/kube-proxy-token-cwn5h: failed to sync secret cache: timed out waiting for the condition
	Aug 13 00:18:55 test-preload-20210813001622-820289 kubelet[5992]: E0813 00:18:55.858817    5992 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/secret/28c49088-079a-4a84-a17a-db12715f9314-kube-proxy-token-cwn5h\" (\"28c49088-079a-4a84-a17a-db12715f9314\")" failed. No retries permitted until 2021-08-13 00:18:56.358798071 +0000 UTC m=+8.239669959 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"kube-proxy-token-cwn5h\" (UniqueName: \"kubernetes.io/secret/28c49088-079a-4a84-a17a-db12715f9314-kube-proxy-token-cwn5h\") pod \"kube-proxy-h8m4f\" (UID: \"28c49088-079a-4a84-a17a-db12715f9314\") : failed to sync secret cache: timed out waiting for the condition"
	Aug 13 00:18:57 test-preload-20210813001622-820289 kubelet[5992]: W0813 00:18:57.596453    5992 pod_container_deletor.go:75] Container "385db44b38c8b7ee4afbd6f499b52d0fa97cdc5e3ebc5fbe7fc1f3ead5c3c1d9" not found in pod's containers
	Aug 13 00:18:57 test-preload-20210813001622-820289 kubelet[5992]: W0813 00:18:57.639167    5992 pod_container_deletor.go:75] Container "122a471892cdfac5b97bf97cf90111bbf7d79dce56963540560b2405194fd701" not found in pod's containers
	Aug 13 00:18:57 test-preload-20210813001622-820289 kubelet[5992]: W0813 00:18:57.644251    5992 pod_container_deletor.go:75] Container "48fd9f641438f1b90b068df7be0e85b20aa2fd34a5b07eb19eb5234b609893b6" not found in pod's containers
	
	* 
	* ==> storage-provisioner [b6da6e0a8e361e2e27ae7f595c6d188231c051fb763827e5aa8cfce4d27fb341] <==
	* I0813 00:18:56.918344       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 00:18:56.955001       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 00:18:56.955761       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	* 
	* ==> storage-provisioner [f2d595a01e7c1af4e0cd4816f9bbf733a0f2002cdd046c625ed5a27ca318b2e9] <==
	* I0813 00:18:13.821410       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 00:18:13.845640       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 00:18:13.846152       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 00:18:13.854994       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 00:18:13.855462       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"40f89104-72b4-434e-bfad-bc0f89347b28", APIVersion:"v1", ResourceVersion:"367", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' test-preload-20210813001622-820289_1df5b662-9520-4750-9a2c-7d6b7c3b5727 became leader
	I0813 00:18:13.855861       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_test-preload-20210813001622-820289_1df5b662-9520-4750-9a2c-7d6b7c3b5727!
	I0813 00:18:13.957403       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_test-preload-20210813001622-820289_1df5b662-9520-4750-9a2c-7d6b7c3b5727!
	

-- /stdout --
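
The kube-scheduler warnings captured above name their own remedy (authentication.go:348). A hedged sketch of that rolebinding, with the ROLEBINDING_NAME and YOUR_NS:YOUR_SA placeholders filled in with illustrative values only (assumptions, not values read from this cluster):

	# hypothetical binding name and service account, per the log's own suggestion
	kubectl create rolebinding -n kube-system scheduler-auth-reader \
	  --role=extension-apiserver-authentication-reader \
	  --serviceaccount=kube-system:kube-scheduler

In this run the lookup failure was transient: the scheduler's client-ca informer logs "Caches are synced" at 00:18:54.786869, so no such rolebinding was actually required.
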
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-20210813001622-820289 -n test-preload-20210813001622-820289
helpers_test.go:262: (dbg) Run:  kubectl --context test-preload-20210813001622-820289 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: 
helpers_test.go:273: ======> post-mortem[TestPreload]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context test-preload-20210813001622-820289 describe pod 
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context test-preload-20210813001622-820289 describe pod : exit status 1 (49.193935ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:278: kubectl --context test-preload-20210813001622-820289 describe pod : exit status 1
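
The describe failure above is mechanical rather than a cluster symptom: the field-selector query at helpers_test.go:262 returned no non-running pods, so "kubectl describe pod" was invoked with an empty name and kubectl rejected it. A minimal guarded equivalent in shell (a sketch only; the harness itself is Go code in helpers_test.go):

	pods=$(kubectl --context test-preload-20210813001622-820289 get po -A \
	  --field-selector=status.phase!=Running -o=jsonpath='{.items[*].metadata.name}')
	# only describe when the non-running pod list is non-empty
	[ -n "$pods" ] && kubectl --context test-preload-20210813001622-820289 describe pod $pods
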
helpers_test.go:176: Cleaning up "test-preload-20210813001622-820289" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-20210813001622-820289
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-20210813001622-820289: (1.139974026s)
--- FAIL: TestPreload (166.09s)

TestNetworkPlugins/group/calico/Start (553.7s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p calico-20210813002446-820289 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=kvm2  --container-runtime=crio

=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:98: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p calico-20210813002446-820289 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=kvm2  --container-runtime=crio: exit status 80 (9m13.66685844s)

-- stdout --
	* [calico-20210813002446-820289] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	  - MINIKUBE_LOCATION=12230
	* Using the kvm2 driver based on user configuration
	* Starting control plane node calico-20210813002446-820289 in cluster calico-20210813002446-820289
	* Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.21.3 on CRI-O 1.20.2 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I0813 00:35:57.658890  865003 out.go:298] Setting OutFile to fd 1 ...
	I0813 00:35:57.658979  865003 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 00:35:57.658987  865003 out.go:311] Setting ErrFile to fd 2...
	I0813 00:35:57.658990  865003 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 00:35:57.659084  865003 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	I0813 00:35:57.659344  865003 out.go:305] Setting JSON to false
	I0813 00:35:57.701880  865003 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":15521,"bootTime":1628799437,"procs":202,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 00:35:57.702005  865003 start.go:121] virtualization: kvm guest
	I0813 00:35:57.704573  865003 out.go:177] * [calico-20210813002446-820289] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 00:35:57.706130  865003 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0813 00:35:57.704717  865003 notify.go:169] Checking for updates...
	I0813 00:35:57.707574  865003 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 00:35:57.709059  865003 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	I0813 00:35:57.710545  865003 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 00:35:57.711342  865003 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 00:35:57.745154  865003 out.go:177] * Using the kvm2 driver based on user configuration
	I0813 00:35:57.745180  865003 start.go:278] selected driver: kvm2
	I0813 00:35:57.745186  865003 start.go:751] validating driver "kvm2" against <nil>
	I0813 00:35:57.745202  865003 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0813 00:35:57.746364  865003 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:35:57.746508  865003 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0813 00:35:57.759317  865003 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0813 00:35:57.759381  865003 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0813 00:35:57.759555  865003 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 00:35:57.759592  865003 cni.go:93] Creating CNI manager for "calico"
	I0813 00:35:57.759601  865003 start_flags.go:272] Found "Calico" CNI - setting NetworkPlugin=cni
	I0813 00:35:57.759613  865003 start_flags.go:277] config:
	{Name:calico-20210813002446-820289 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:calico-20210813002446-820289 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 00:35:57.759777  865003 iso.go:123] acquiring lock: {Name:mk52748db467e5aa4b344902ee09c9ea40467a67 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:35:57.762159  865003 out.go:177] * Starting control plane node calico-20210813002446-820289 in cluster calico-20210813002446-820289
	I0813 00:35:57.762189  865003 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 00:35:57.762221  865003 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0813 00:35:57.762249  865003 cache.go:56] Caching tarball of preloaded images
	I0813 00:35:57.762351  865003 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0813 00:35:57.762376  865003 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on crio
	I0813 00:35:57.762508  865003 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002446-820289/config.json ...
	I0813 00:35:57.762534  865003 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002446-820289/config.json: {Name:mkd697d7ff045db9ab537a50749ad3ab8f83c9c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:35:57.762692  865003 cache.go:205] Successfully downloaded all kic artifacts
	I0813 00:35:57.762718  865003 start.go:313] acquiring machines lock for calico-20210813002446-820289: {Name:mk2d46e46728943fc604570595bb7616469b4e8e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 00:35:57.836310  865003 start.go:317] acquired machines lock for "calico-20210813002446-820289" in 73.567858ms
	I0813 00:35:57.836370  865003 start.go:89] Provisioning new machine with config: &{Name:calico-20210813002446-820289 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12122/minikube-v1.22.0-1628238775-12122.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:calico-20210813002446-820289 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 00:35:57.836461  865003 start.go:126] createHost starting for "" (driver="kvm2")
	I0813 00:35:57.838689  865003 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0813 00:35:57.838868  865003 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 00:35:57.838926  865003 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 00:35:57.854305  865003 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40915
	I0813 00:35:57.855562  865003 main.go:130] libmachine: () Calling .GetVersion
	I0813 00:35:57.856192  865003 main.go:130] libmachine: Using API Version  1
	I0813 00:35:57.856213  865003 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 00:35:57.856687  865003 main.go:130] libmachine: () Calling .GetMachineName
	I0813 00:35:57.856904  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetMachineName
	I0813 00:35:57.857089  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .DriverName
	I0813 00:35:57.857282  865003 start.go:160] libmachine.API.Create for "calico-20210813002446-820289" (driver="kvm2")
	I0813 00:35:57.857347  865003 client.go:168] LocalClient.Create starting
	I0813 00:35:57.857398  865003 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem
	I0813 00:35:57.857433  865003 main.go:130] libmachine: Decoding PEM data...
	I0813 00:35:57.857456  865003 main.go:130] libmachine: Parsing certificate...
	I0813 00:35:57.857625  865003 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem
	I0813 00:35:57.857657  865003 main.go:130] libmachine: Decoding PEM data...
	I0813 00:35:57.857678  865003 main.go:130] libmachine: Parsing certificate...
	I0813 00:35:57.857742  865003 main.go:130] libmachine: Running pre-create checks...
	I0813 00:35:57.857759  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .PreCreateCheck
	I0813 00:35:57.858122  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetConfigRaw
	I0813 00:35:57.858614  865003 main.go:130] libmachine: Creating machine...
	I0813 00:35:57.858631  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .Create
	I0813 00:35:57.858806  865003 main.go:130] libmachine: (calico-20210813002446-820289) Creating KVM machine...
	I0813 00:35:57.861639  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | found existing default KVM network
	I0813 00:35:57.863983  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | I0813 00:35:57.863798  865027 network.go:240] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:4d:87:7c}}
	I0813 00:35:57.866067  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | I0813 00:35:57.865940  865027 network.go:288] reserving subnet 192.168.50.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.50.0:0xc0000bc800] misses:0}
	I0813 00:35:57.866112  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | I0813 00:35:57.866004  865027 network.go:235] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0813 00:35:57.890324  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | trying to create private KVM network mk-calico-20210813002446-820289 192.168.50.0/24...
	I0813 00:35:58.188337  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | private KVM network mk-calico-20210813002446-820289 192.168.50.0/24 created
	I0813 00:35:58.188386  865003 main.go:130] libmachine: (calico-20210813002446-820289) Setting up store path in /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/calico-20210813002446-820289 ...
	I0813 00:35:58.188407  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | I0813 00:35:58.188274  865027 common.go:108] Making disk image using store path: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	I0813 00:35:58.188437  865003 main.go:130] libmachine: (calico-20210813002446-820289) Building disk image from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/iso/minikube-v1.22.0-1628238775-12122.iso
	I0813 00:35:58.188523  865003 main.go:130] libmachine: (calico-20210813002446-820289) Downloading /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/iso/minikube-v1.22.0-1628238775-12122.iso...
	I0813 00:35:58.394568  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | I0813 00:35:58.394406  865027 common.go:115] Creating ssh key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/calico-20210813002446-820289/id_rsa...
	I0813 00:35:58.602826  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | I0813 00:35:58.602675  865027 common.go:121] Creating raw disk image: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/calico-20210813002446-820289/calico-20210813002446-820289.rawdisk...
	I0813 00:35:58.602876  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | Writing magic tar header
	I0813 00:35:58.602908  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | Writing SSH key tar header
	I0813 00:35:58.602928  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | I0813 00:35:58.602826  865027 common.go:135] Fixing permissions on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/calico-20210813002446-820289 ...
	I0813 00:35:58.602951  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/calico-20210813002446-820289
	I0813 00:35:58.603014  865003 main.go:130] libmachine: (calico-20210813002446-820289) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/calico-20210813002446-820289 (perms=drwx------)
	I0813 00:35:58.603041  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines
	I0813 00:35:58.603052  865003 main.go:130] libmachine: (calico-20210813002446-820289) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines (perms=drwxr-xr-x)
	I0813 00:35:58.603064  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	I0813 00:35:58.603081  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b
	I0813 00:35:58.603101  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0813 00:35:58.603119  865003 main.go:130] libmachine: (calico-20210813002446-820289) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube (perms=drwxr-xr-x)
	I0813 00:35:58.603145  865003 main.go:130] libmachine: (calico-20210813002446-820289) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b (perms=drwxr-xr-x)
	I0813 00:35:58.603164  865003 main.go:130] libmachine: (calico-20210813002446-820289) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxr-xr-x)
	I0813 00:35:58.603175  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | Checking permissions on dir: /home/jenkins
	I0813 00:35:58.603201  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | Checking permissions on dir: /home
	I0813 00:35:58.603213  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | Skipping /home - not owner
	I0813 00:35:58.603238  865003 main.go:130] libmachine: (calico-20210813002446-820289) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0813 00:35:58.603254  865003 main.go:130] libmachine: (calico-20210813002446-820289) Creating domain...
	I0813 00:35:58.630195  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | domain calico-20210813002446-820289 has defined MAC address 52:54:00:97:c4:67 in network default
	I0813 00:35:58.630696  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | domain calico-20210813002446-820289 has defined MAC address 52:54:00:a1:07:a7 in network mk-calico-20210813002446-820289
	I0813 00:35:58.630715  865003 main.go:130] libmachine: (calico-20210813002446-820289) Ensuring networks are active...
	I0813 00:35:58.632915  865003 main.go:130] libmachine: (calico-20210813002446-820289) Ensuring network default is active
	I0813 00:35:58.633238  865003 main.go:130] libmachine: (calico-20210813002446-820289) Ensuring network mk-calico-20210813002446-820289 is active
	I0813 00:35:58.633885  865003 main.go:130] libmachine: (calico-20210813002446-820289) Getting domain xml...
	I0813 00:35:58.635901  865003 main.go:130] libmachine: (calico-20210813002446-820289) Creating domain...
	I0813 00:35:59.035314  865003 main.go:130] libmachine: (calico-20210813002446-820289) Waiting to get IP...
	I0813 00:35:59.036255  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | domain calico-20210813002446-820289 has defined MAC address 52:54:00:a1:07:a7 in network mk-calico-20210813002446-820289
	I0813 00:35:59.036725  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | unable to find current IP address of domain calico-20210813002446-820289 in network mk-calico-20210813002446-820289
	I0813 00:35:59.036790  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | I0813 00:35:59.036704  865027 retry.go:31] will retry after 263.082536ms: waiting for machine to come up
	I0813 00:35:59.301010  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | domain calico-20210813002446-820289 has defined MAC address 52:54:00:a1:07:a7 in network mk-calico-20210813002446-820289
	I0813 00:35:59.301462  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | unable to find current IP address of domain calico-20210813002446-820289 in network mk-calico-20210813002446-820289
	I0813 00:35:59.301494  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | I0813 00:35:59.301405  865027 retry.go:31] will retry after 381.329545ms: waiting for machine to come up
	I0813 00:35:59.684097  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | domain calico-20210813002446-820289 has defined MAC address 52:54:00:a1:07:a7 in network mk-calico-20210813002446-820289
	I0813 00:35:59.684700  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | unable to find current IP address of domain calico-20210813002446-820289 in network mk-calico-20210813002446-820289
	I0813 00:35:59.684735  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | I0813 00:35:59.684649  865027 retry.go:31] will retry after 422.765636ms: waiting for machine to come up
	I0813 00:36:00.109442  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | domain calico-20210813002446-820289 has defined MAC address 52:54:00:a1:07:a7 in network mk-calico-20210813002446-820289
	I0813 00:36:00.109978  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | unable to find current IP address of domain calico-20210813002446-820289 in network mk-calico-20210813002446-820289
	I0813 00:36:00.110011  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | I0813 00:36:00.109916  865027 retry.go:31] will retry after 473.074753ms: waiting for machine to come up
	I0813 00:36:00.584160  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | domain calico-20210813002446-820289 has defined MAC address 52:54:00:a1:07:a7 in network mk-calico-20210813002446-820289
	I0813 00:36:00.584731  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | unable to find current IP address of domain calico-20210813002446-820289 in network mk-calico-20210813002446-820289
	I0813 00:36:00.584768  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | I0813 00:36:00.584680  865027 retry.go:31] will retry after 587.352751ms: waiting for machine to come up
	I0813 00:36:01.173332  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | domain calico-20210813002446-820289 has defined MAC address 52:54:00:a1:07:a7 in network mk-calico-20210813002446-820289
	I0813 00:36:01.173775  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | unable to find current IP address of domain calico-20210813002446-820289 in network mk-calico-20210813002446-820289
	I0813 00:36:01.173807  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | I0813 00:36:01.173737  865027 retry.go:31] will retry after 834.206799ms: waiting for machine to come up
	I0813 00:36:02.009812  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | domain calico-20210813002446-820289 has defined MAC address 52:54:00:a1:07:a7 in network mk-calico-20210813002446-820289
	I0813 00:36:02.010307  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | unable to find current IP address of domain calico-20210813002446-820289 in network mk-calico-20210813002446-820289
	I0813 00:36:02.010342  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | I0813 00:36:02.010256  865027 retry.go:31] will retry after 746.553905ms: waiting for machine to come up
	I0813 00:36:02.758691  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | domain calico-20210813002446-820289 has defined MAC address 52:54:00:a1:07:a7 in network mk-calico-20210813002446-820289
	I0813 00:36:02.759278  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | unable to find current IP address of domain calico-20210813002446-820289 in network mk-calico-20210813002446-820289
	I0813 00:36:02.759308  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | I0813 00:36:02.759227  865027 retry.go:31] will retry after 987.362415ms: waiting for machine to come up
	I0813 00:36:03.748113  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | domain calico-20210813002446-820289 has defined MAC address 52:54:00:a1:07:a7 in network mk-calico-20210813002446-820289
	I0813 00:36:03.748677  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | unable to find current IP address of domain calico-20210813002446-820289 in network mk-calico-20210813002446-820289
	I0813 00:36:03.748701  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | I0813 00:36:03.748572  865027 retry.go:31] will retry after 1.189835008s: waiting for machine to come up
	I0813 00:36:04.940017  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | domain calico-20210813002446-820289 has defined MAC address 52:54:00:a1:07:a7 in network mk-calico-20210813002446-820289
	I0813 00:36:04.940572  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | unable to find current IP address of domain calico-20210813002446-820289 in network mk-calico-20210813002446-820289
	I0813 00:36:04.940626  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | I0813 00:36:04.940555  865027 retry.go:31] will retry after 1.677229867s: waiting for machine to come up
	I0813 00:36:06.619082  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | domain calico-20210813002446-820289 has defined MAC address 52:54:00:a1:07:a7 in network mk-calico-20210813002446-820289
	I0813 00:36:06.619595  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | unable to find current IP address of domain calico-20210813002446-820289 in network mk-calico-20210813002446-820289
	I0813 00:36:06.619632  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | I0813 00:36:06.619515  865027 retry.go:31] will retry after 2.346016261s: waiting for machine to come up
	I0813 00:36:08.967172  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | domain calico-20210813002446-820289 has defined MAC address 52:54:00:a1:07:a7 in network mk-calico-20210813002446-820289
	I0813 00:36:08.967728  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | unable to find current IP address of domain calico-20210813002446-820289 in network mk-calico-20210813002446-820289
	I0813 00:36:08.967904  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | I0813 00:36:08.967816  865027 retry.go:31] will retry after 3.36678925s: waiting for machine to come up
	I0813 00:36:12.337822  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | domain calico-20210813002446-820289 has defined MAC address 52:54:00:a1:07:a7 in network mk-calico-20210813002446-820289
	I0813 00:36:12.338376  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | unable to find current IP address of domain calico-20210813002446-820289 in network mk-calico-20210813002446-820289
	I0813 00:36:12.338408  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | I0813 00:36:12.338319  865027 retry.go:31] will retry after 3.11822781s: waiting for machine to come up
	I0813 00:36:15.458336  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | domain calico-20210813002446-820289 has defined MAC address 52:54:00:a1:07:a7 in network mk-calico-20210813002446-820289
	I0813 00:36:15.458852  865003 main.go:130] libmachine: (calico-20210813002446-820289) Found IP for machine: 192.168.50.30
	I0813 00:36:15.458874  865003 main.go:130] libmachine: (calico-20210813002446-820289) Reserving static IP address...
	I0813 00:36:15.458898  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | domain calico-20210813002446-820289 has current primary IP address 192.168.50.30 and MAC address 52:54:00:a1:07:a7 in network mk-calico-20210813002446-820289
	I0813 00:36:15.459237  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | unable to find host DHCP lease matching {name: "calico-20210813002446-820289", mac: "52:54:00:a1:07:a7", ip: "192.168.50.30"} in network mk-calico-20210813002446-820289
	I0813 00:36:15.512065  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | Getting to WaitForSSH function...
	I0813 00:36:15.512103  865003 main.go:130] libmachine: (calico-20210813002446-820289) Reserved static IP address: 192.168.50.30
	I0813 00:36:15.512113  865003 main.go:130] libmachine: (calico-20210813002446-820289) Waiting for SSH to be available...
	I0813 00:36:15.518592  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | domain calico-20210813002446-820289 has defined MAC address 52:54:00:a1:07:a7 in network mk-calico-20210813002446-820289
	I0813 00:36:15.519205  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:07:a7", ip: ""} in network mk-calico-20210813002446-820289: {Iface:virbr2 ExpiryTime:2021-08-13 01:36:14 +0000 UTC Type:0 Mac:52:54:00:a1:07:a7 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a1:07:a7}
	I0813 00:36:15.519232  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | domain calico-20210813002446-820289 has defined IP address 192.168.50.30 and MAC address 52:54:00:a1:07:a7 in network mk-calico-20210813002446-820289
	I0813 00:36:15.519443  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | Using SSH client type: external
	I0813 00:36:15.519476  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/calico-20210813002446-820289/id_rsa (-rw-------)
	I0813 00:36:15.519514  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.30 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/calico-20210813002446-820289/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 00:36:15.519529  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | About to run SSH command:
	I0813 00:36:15.519582  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | exit 0
	I0813 00:36:15.677761  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | SSH cmd err, output: <nil>: 
	I0813 00:36:15.678411  865003 main.go:130] libmachine: (calico-20210813002446-820289) KVM machine creation complete!
	I0813 00:36:15.678415  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetConfigRaw
	I0813 00:36:15.679181  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .DriverName
	I0813 00:36:15.679697  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .DriverName
	I0813 00:36:15.679943  865003 main.go:130] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0813 00:36:15.679963  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetState
	I0813 00:36:15.683184  865003 main.go:130] libmachine: Detecting operating system of created instance...
	I0813 00:36:15.683199  865003 main.go:130] libmachine: Waiting for SSH to be available...
	I0813 00:36:15.683209  865003 main.go:130] libmachine: Getting to WaitForSSH function...
	I0813 00:36:15.683219  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetSSHHostname
	I0813 00:36:15.689134  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | domain calico-20210813002446-820289 has defined MAC address 52:54:00:a1:07:a7 in network mk-calico-20210813002446-820289
	I0813 00:36:15.689542  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:07:a7", ip: ""} in network mk-calico-20210813002446-820289: {Iface:virbr2 ExpiryTime:2021-08-13 01:36:14 +0000 UTC Type:0 Mac:52:54:00:a1:07:a7 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:calico-20210813002446-820289 Clientid:01:52:54:00:a1:07:a7}
	I0813 00:36:15.689570  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | domain calico-20210813002446-820289 has defined IP address 192.168.50.30 and MAC address 52:54:00:a1:07:a7 in network mk-calico-20210813002446-820289
	I0813 00:36:15.689720  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetSSHPort
	I0813 00:36:15.689890  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetSSHKeyPath
	I0813 00:36:15.690067  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetSSHKeyPath
	I0813 00:36:15.690247  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetSSHUsername
	I0813 00:36:15.690431  865003 main.go:130] libmachine: Using SSH client type: native
	I0813 00:36:15.690631  865003 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.30 22 <nil> <nil>}
	I0813 00:36:15.690643  865003 main.go:130] libmachine: About to run SSH command:
	exit 0
	I0813 00:36:15.838869  865003 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 00:36:15.838896  865003 main.go:130] libmachine: Detecting the provisioner...
	I0813 00:36:15.838907  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetSSHHostname
	I0813 00:36:15.845351  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | domain calico-20210813002446-820289 has defined MAC address 52:54:00:a1:07:a7 in network mk-calico-20210813002446-820289
	I0813 00:36:15.845819  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:07:a7", ip: ""} in network mk-calico-20210813002446-820289: {Iface:virbr2 ExpiryTime:2021-08-13 01:36:14 +0000 UTC Type:0 Mac:52:54:00:a1:07:a7 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:calico-20210813002446-820289 Clientid:01:52:54:00:a1:07:a7}
	I0813 00:36:15.845848  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | domain calico-20210813002446-820289 has defined IP address 192.168.50.30 and MAC address 52:54:00:a1:07:a7 in network mk-calico-20210813002446-820289
	I0813 00:36:15.846234  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetSSHPort
	I0813 00:36:15.846404  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetSSHKeyPath
	I0813 00:36:15.846627  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetSSHKeyPath
	I0813 00:36:15.846813  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetSSHUsername
	I0813 00:36:15.847063  865003 main.go:130] libmachine: Using SSH client type: native
	I0813 00:36:15.847261  865003 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.30 22 <nil> <nil>}
	I0813 00:36:15.847282  865003 main.go:130] libmachine: About to run SSH command:
	cat /etc/os-release
	I0813 00:36:16.014580  865003 main.go:130] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2020.02.12
	ID=buildroot
	VERSION_ID=2020.02.12
	PRETTY_NAME="Buildroot 2020.02.12"
	
	I0813 00:36:16.014722  865003 main.go:130] libmachine: found compatible host: buildroot
	I0813 00:36:16.014741  865003 main.go:130] libmachine: Provisioning with buildroot...
	I0813 00:36:16.014757  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetMachineName
	I0813 00:36:16.015071  865003 buildroot.go:166] provisioning hostname "calico-20210813002446-820289"
	I0813 00:36:16.015138  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetMachineName
	I0813 00:36:16.015371  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetSSHHostname
	I0813 00:36:16.021909  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | domain calico-20210813002446-820289 has defined MAC address 52:54:00:a1:07:a7 in network mk-calico-20210813002446-820289
	I0813 00:36:16.022234  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:07:a7", ip: ""} in network mk-calico-20210813002446-820289: {Iface:virbr2 ExpiryTime:2021-08-13 01:36:14 +0000 UTC Type:0 Mac:52:54:00:a1:07:a7 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:calico-20210813002446-820289 Clientid:01:52:54:00:a1:07:a7}
	I0813 00:36:16.022273  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | domain calico-20210813002446-820289 has defined IP address 192.168.50.30 and MAC address 52:54:00:a1:07:a7 in network mk-calico-20210813002446-820289
	I0813 00:36:16.022522  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetSSHPort
	I0813 00:36:16.022740  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetSSHKeyPath
	I0813 00:36:16.022908  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetSSHKeyPath
	I0813 00:36:16.023100  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetSSHUsername
	I0813 00:36:16.023317  865003 main.go:130] libmachine: Using SSH client type: native
	I0813 00:36:16.023510  865003 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.30 22 <nil> <nil>}
	I0813 00:36:16.023526  865003 main.go:130] libmachine: About to run SSH command:
	sudo hostname calico-20210813002446-820289 && echo "calico-20210813002446-820289" | sudo tee /etc/hostname
	I0813 00:36:16.212756  865003 main.go:130] libmachine: SSH cmd err, output: <nil>: calico-20210813002446-820289
	
	I0813 00:36:16.212792  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetSSHHostname
	I0813 00:36:16.219082  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | domain calico-20210813002446-820289 has defined MAC address 52:54:00:a1:07:a7 in network mk-calico-20210813002446-820289
	I0813 00:36:16.219523  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:07:a7", ip: ""} in network mk-calico-20210813002446-820289: {Iface:virbr2 ExpiryTime:2021-08-13 01:36:14 +0000 UTC Type:0 Mac:52:54:00:a1:07:a7 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:calico-20210813002446-820289 Clientid:01:52:54:00:a1:07:a7}
	I0813 00:36:16.219553  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | domain calico-20210813002446-820289 has defined IP address 192.168.50.30 and MAC address 52:54:00:a1:07:a7 in network mk-calico-20210813002446-820289
	I0813 00:36:16.219880  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetSSHPort
	I0813 00:36:16.220131  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetSSHKeyPath
	I0813 00:36:16.220308  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetSSHKeyPath
	I0813 00:36:16.220450  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetSSHUsername
	I0813 00:36:16.220640  865003 main.go:130] libmachine: Using SSH client type: native
	I0813 00:36:16.220887  865003 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.30 22 <nil> <nil>}
	I0813 00:36:16.220920  865003 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-20210813002446-820289' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-20210813002446-820289/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-20210813002446-820289' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 00:36:16.377528  865003 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 00:36:16.377567  865003 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube}
	I0813 00:36:16.377600  865003 buildroot.go:174] setting up certificates
	I0813 00:36:16.377613  865003 provision.go:83] configureAuth start
	I0813 00:36:16.377626  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetMachineName
	I0813 00:36:16.377949  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetIP
	I0813 00:36:16.384290  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | domain calico-20210813002446-820289 has defined MAC address 52:54:00:a1:07:a7 in network mk-calico-20210813002446-820289
	I0813 00:36:16.384759  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:07:a7", ip: ""} in network mk-calico-20210813002446-820289: {Iface:virbr2 ExpiryTime:2021-08-13 01:36:14 +0000 UTC Type:0 Mac:52:54:00:a1:07:a7 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:calico-20210813002446-820289 Clientid:01:52:54:00:a1:07:a7}
	I0813 00:36:16.384792  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | domain calico-20210813002446-820289 has defined IP address 192.168.50.30 and MAC address 52:54:00:a1:07:a7 in network mk-calico-20210813002446-820289
	I0813 00:36:16.385014  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetSSHHostname
	I0813 00:36:16.390551  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | domain calico-20210813002446-820289 has defined MAC address 52:54:00:a1:07:a7 in network mk-calico-20210813002446-820289
	I0813 00:36:16.390969  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:07:a7", ip: ""} in network mk-calico-20210813002446-820289: {Iface:virbr2 ExpiryTime:2021-08-13 01:36:14 +0000 UTC Type:0 Mac:52:54:00:a1:07:a7 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:calico-20210813002446-820289 Clientid:01:52:54:00:a1:07:a7}
	I0813 00:36:16.391000  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | domain calico-20210813002446-820289 has defined IP address 192.168.50.30 and MAC address 52:54:00:a1:07:a7 in network mk-calico-20210813002446-820289
	I0813 00:36:16.391272  865003 provision.go:137] copyHostCerts
	I0813 00:36:16.391355  865003 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem, removing ...
	I0813 00:36:16.391369  865003 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem
	I0813 00:36:16.391429  865003 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem (1679 bytes)
	I0813 00:36:16.391522  865003 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem, removing ...
	I0813 00:36:16.391535  865003 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem
	I0813 00:36:16.391557  865003 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem (1078 bytes)
	I0813 00:36:16.391610  865003 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem, removing ...
	I0813 00:36:16.391620  865003 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem
	I0813 00:36:16.391641  865003 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem (1123 bytes)
	I0813 00:36:16.391694  865003 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem org=jenkins.calico-20210813002446-820289 san=[192.168.50.30 192.168.50.30 localhost 127.0.0.1 minikube calico-20210813002446-820289]
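The configureAuth step generates a server certificate signed by minikube's own CA, with every address the machine may be reached at listed as a SAN. A roughly equivalent openssl flow, assuming the ca.pem/ca-key.pem pair from the log (file names are placeholders, not minikube's implementation):

	# Issue a server cert with the same SAN set (illustrative sketch)
	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	  -out server.csr -subj "/O=jenkins.calico-20210813002446-820289"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -out server.pem -days 365 -extfile <(printf \
	  "subjectAltName=IP:192.168.50.30,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:calico-20210813002446-820289")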
	I0813 00:36:16.597475  865003 provision.go:171] copyRemoteCerts
	I0813 00:36:16.597553  865003 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 00:36:16.597600  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetSSHHostname
	I0813 00:36:16.603690  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | domain calico-20210813002446-820289 has defined MAC address 52:54:00:a1:07:a7 in network mk-calico-20210813002446-820289
	I0813 00:36:16.604126  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:07:a7", ip: ""} in network mk-calico-20210813002446-820289: {Iface:virbr2 ExpiryTime:2021-08-13 01:36:14 +0000 UTC Type:0 Mac:52:54:00:a1:07:a7 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:calico-20210813002446-820289 Clientid:01:52:54:00:a1:07:a7}
	I0813 00:36:16.604158  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | domain calico-20210813002446-820289 has defined IP address 192.168.50.30 and MAC address 52:54:00:a1:07:a7 in network mk-calico-20210813002446-820289
	I0813 00:36:16.604394  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetSSHPort
	I0813 00:36:16.604611  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetSSHKeyPath
	I0813 00:36:16.604814  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetSSHUsername
	I0813 00:36:16.604994  865003 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/calico-20210813002446-820289/id_rsa Username:docker}
	I0813 00:36:16.697952  865003 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 00:36:16.720255  865003 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0813 00:36:16.742023  865003 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0813 00:36:16.763187  865003 provision.go:86] duration metric: configureAuth took 385.560104ms
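With the three files landed under /etc/docker, the result can be sanity-checked on the guest; a hedged verification, not part of the test run:

	# Server cert must chain to the provisioned CA ...
	openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem
	# ... and the private key must match the cert (digests should be equal)
	openssl x509 -noout -modulus -in /etc/docker/server.pem | openssl md5
	openssl rsa  -noout -modulus -in /etc/docker/server-key.pem | openssl md5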
	I0813 00:36:16.763214  865003 buildroot.go:189] setting minikube options for container-runtime
	I0813 00:36:16.763507  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetSSHHostname
	I0813 00:36:16.770164  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | domain calico-20210813002446-820289 has defined MAC address 52:54:00:a1:07:a7 in network mk-calico-20210813002446-820289
	I0813 00:36:16.770584  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:07:a7", ip: ""} in network mk-calico-20210813002446-820289: {Iface:virbr2 ExpiryTime:2021-08-13 01:36:14 +0000 UTC Type:0 Mac:52:54:00:a1:07:a7 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:calico-20210813002446-820289 Clientid:01:52:54:00:a1:07:a7}
	I0813 00:36:16.770621  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | domain calico-20210813002446-820289 has defined IP address 192.168.50.30 and MAC address 52:54:00:a1:07:a7 in network mk-calico-20210813002446-820289
	I0813 00:36:16.770831  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetSSHPort
	I0813 00:36:16.771031  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetSSHKeyPath
	I0813 00:36:16.771209  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetSSHKeyPath
	I0813 00:36:16.771374  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetSSHUsername
	I0813 00:36:16.771536  865003 main.go:130] libmachine: Using SSH client type: native
	I0813 00:36:16.771739  865003 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 192.168.50.30 22 <nil> <nil>}
	I0813 00:36:16.771770  865003 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 00:36:17.249036  865003 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
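The command above drops an environment file read by the crio unit and restarts the service, so the --insecure-registry option covers the 10.96.0.0/12 service CIDR. To confirm it landed (a hedged check, not from the test):

	cat /etc/sysconfig/crio.minikube   # should print the CRIO_MINIKUBE_OPTIONS line
	systemctl is-active crio           # expect: active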
	I0813 00:36:17.249075  865003 main.go:130] libmachine: Checking connection to Docker...
	I0813 00:36:17.249089  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetURL
	I0813 00:36:17.252125  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | Using libvirt version 3000000
	I0813 00:36:17.257830  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | domain calico-20210813002446-820289 has defined MAC address 52:54:00:a1:07:a7 in network mk-calico-20210813002446-820289
	I0813 00:36:17.258285  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:07:a7", ip: ""} in network mk-calico-20210813002446-820289: {Iface:virbr2 ExpiryTime:2021-08-13 01:36:14 +0000 UTC Type:0 Mac:52:54:00:a1:07:a7 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:calico-20210813002446-820289 Clientid:01:52:54:00:a1:07:a7}
	I0813 00:36:17.258329  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | domain calico-20210813002446-820289 has defined IP address 192.168.50.30 and MAC address 52:54:00:a1:07:a7 in network mk-calico-20210813002446-820289
	I0813 00:36:17.258490  865003 main.go:130] libmachine: Docker is up and running!
	I0813 00:36:17.258510  865003 main.go:130] libmachine: Reticulating splines...
	I0813 00:36:17.258518  865003 client.go:171] LocalClient.Create took 19.401158397s
	I0813 00:36:17.258544  865003 start.go:168] duration metric: libmachine.API.Create for "calico-20210813002446-820289" took 19.40126307s
	I0813 00:36:17.258557  865003 start.go:267] post-start starting for "calico-20210813002446-820289" (driver="kvm2")
	I0813 00:36:17.258567  865003 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 00:36:17.258599  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .DriverName
	I0813 00:36:17.258912  865003 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 00:36:17.258945  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetSSHHostname
	I0813 00:36:17.264229  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | domain calico-20210813002446-820289 has defined MAC address 52:54:00:a1:07:a7 in network mk-calico-20210813002446-820289
	I0813 00:36:17.264640  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:07:a7", ip: ""} in network mk-calico-20210813002446-820289: {Iface:virbr2 ExpiryTime:2021-08-13 01:36:14 +0000 UTC Type:0 Mac:52:54:00:a1:07:a7 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:calico-20210813002446-820289 Clientid:01:52:54:00:a1:07:a7}
	I0813 00:36:17.264666  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | domain calico-20210813002446-820289 has defined IP address 192.168.50.30 and MAC address 52:54:00:a1:07:a7 in network mk-calico-20210813002446-820289
	I0813 00:36:17.264835  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetSSHPort
	I0813 00:36:17.265038  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetSSHKeyPath
	I0813 00:36:17.265177  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetSSHUsername
	I0813 00:36:17.265348  865003 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/calico-20210813002446-820289/id_rsa Username:docker}
	I0813 00:36:17.357841  865003 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 00:36:17.362540  865003 info.go:137] Remote host: Buildroot 2020.02.12
	I0813 00:36:17.362566  865003 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/addons for local assets ...
	I0813 00:36:17.362631  865003 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files for local assets ...
	I0813 00:36:17.362739  865003 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/8202892.pem -> 8202892.pem in /etc/ssl/certs
	I0813 00:36:17.362852  865003 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 00:36:17.370675  865003 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/8202892.pem --> /etc/ssl/certs/8202892.pem (1708 bytes)
	I0813 00:36:17.388979  865003 start.go:270] post-start completed in 130.404571ms
	I0813 00:36:17.389038  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetConfigRaw
	I0813 00:36:17.389789  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetIP
	I0813 00:36:17.395989  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | domain calico-20210813002446-820289 has defined MAC address 52:54:00:a1:07:a7 in network mk-calico-20210813002446-820289
	I0813 00:36:17.396404  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:07:a7", ip: ""} in network mk-calico-20210813002446-820289: {Iface:virbr2 ExpiryTime:2021-08-13 01:36:14 +0000 UTC Type:0 Mac:52:54:00:a1:07:a7 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:calico-20210813002446-820289 Clientid:01:52:54:00:a1:07:a7}
	I0813 00:36:17.396430  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | domain calico-20210813002446-820289 has defined IP address 192.168.50.30 and MAC address 52:54:00:a1:07:a7 in network mk-calico-20210813002446-820289
	I0813 00:36:17.396732  865003 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002446-820289/config.json ...
	I0813 00:36:17.396933  865003 start.go:129] duration metric: createHost completed in 19.560461026s
	I0813 00:36:17.396951  865003 start.go:80] releasing machines lock for "calico-20210813002446-820289", held for 19.560617342s
	I0813 00:36:17.396993  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .DriverName
	I0813 00:36:17.397205  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetIP
	I0813 00:36:17.401943  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | domain calico-20210813002446-820289 has defined MAC address 52:54:00:a1:07:a7 in network mk-calico-20210813002446-820289
	I0813 00:36:17.402313  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:07:a7", ip: ""} in network mk-calico-20210813002446-820289: {Iface:virbr2 ExpiryTime:2021-08-13 01:36:14 +0000 UTC Type:0 Mac:52:54:00:a1:07:a7 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:calico-20210813002446-820289 Clientid:01:52:54:00:a1:07:a7}
	I0813 00:36:17.402345  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | domain calico-20210813002446-820289 has defined IP address 192.168.50.30 and MAC address 52:54:00:a1:07:a7 in network mk-calico-20210813002446-820289
	I0813 00:36:17.402459  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .DriverName
	I0813 00:36:17.402631  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .DriverName
	I0813 00:36:17.403173  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .DriverName
	I0813 00:36:17.403398  865003 ssh_runner.go:149] Run: systemctl --version
	I0813 00:36:17.403449  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetSSHHostname
	I0813 00:36:17.403455  865003 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 00:36:17.403498  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetSSHHostname
	I0813 00:36:17.409934  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | domain calico-20210813002446-820289 has defined MAC address 52:54:00:a1:07:a7 in network mk-calico-20210813002446-820289
	I0813 00:36:17.410383  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:07:a7", ip: ""} in network mk-calico-20210813002446-820289: {Iface:virbr2 ExpiryTime:2021-08-13 01:36:14 +0000 UTC Type:0 Mac:52:54:00:a1:07:a7 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:calico-20210813002446-820289 Clientid:01:52:54:00:a1:07:a7}
	I0813 00:36:17.410422  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | domain calico-20210813002446-820289 has defined IP address 192.168.50.30 and MAC address 52:54:00:a1:07:a7 in network mk-calico-20210813002446-820289
	I0813 00:36:17.410525  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetSSHPort
	I0813 00:36:17.410745  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetSSHKeyPath
	I0813 00:36:17.410903  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetSSHUsername
	I0813 00:36:17.411030  865003 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/calico-20210813002446-820289/id_rsa Username:docker}
	I0813 00:36:17.411469  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | domain calico-20210813002446-820289 has defined MAC address 52:54:00:a1:07:a7 in network mk-calico-20210813002446-820289
	I0813 00:36:17.411868  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:07:a7", ip: ""} in network mk-calico-20210813002446-820289: {Iface:virbr2 ExpiryTime:2021-08-13 01:36:14 +0000 UTC Type:0 Mac:52:54:00:a1:07:a7 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:calico-20210813002446-820289 Clientid:01:52:54:00:a1:07:a7}
	I0813 00:36:17.411901  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | domain calico-20210813002446-820289 has defined IP address 192.168.50.30 and MAC address 52:54:00:a1:07:a7 in network mk-calico-20210813002446-820289
	I0813 00:36:17.412099  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetSSHPort
	I0813 00:36:17.412277  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetSSHKeyPath
	I0813 00:36:17.412441  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetSSHUsername
	I0813 00:36:17.412570  865003 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/calico-20210813002446-820289/id_rsa Username:docker}
	I0813 00:36:17.507518  865003 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 00:36:17.507619  865003 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 00:36:21.509350  865003 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.001706341s)
	I0813 00:36:21.509481  865003 crio.go:420] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.21.3". assuming images are not preloaded.
	I0813 00:36:21.509547  865003 ssh_runner.go:149] Run: which lz4
	I0813 00:36:21.514587  865003 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0813 00:36:21.518876  865003 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0813 00:36:21.518924  865003 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (576184326 bytes)
	I0813 00:36:24.176356  865003 crio.go:362] Took 2.661795 seconds to copy over tarball
	I0813 00:36:24.176427  865003 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0813 00:36:31.074089  865003 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (6.897628424s)
	I0813 00:36:31.074124  865003 crio.go:369] Took 6.897735 seconds to extract the tarball
	I0813 00:36:31.074138  865003 ssh_runner.go:100] rm: /preloaded.tar.lz4
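The preload exchange above is check, copy, extract, delete: the stat probe fails (status 1) because no tarball exists yet, ~576 MB is streamed over SSH, and the archive is unpacked straight into /var so CRI-O's image store comes up pre-populated. The guest-side half, assuming lz4 is in PATH as on the minikube ISO:

	# Probe first: exit status 1 means the tarball still has to be copied over
	stat -c "%s %y" /preloaded.tar.lz4
	# Unpack the image store in place, then drop the archive
	sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm /preloaded.tar.lz4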
	I0813 00:36:31.115630  865003 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 00:36:31.127361  865003 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 00:36:31.137525  865003 docker.go:153] disabling docker service ...
	I0813 00:36:31.137582  865003 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 00:36:31.149600  865003 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 00:36:31.161092  865003 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 00:36:31.306157  865003 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 00:36:31.441751  865003 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 00:36:31.452533  865003 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 00:36:31.467505  865003 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
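The two commands above wire client and runtime together: /etc/crictl.yaml makes the CRI-O socket crictl's default endpoint, and the sed pins CRI-O's pause_image to the version this Kubernetes release expects. Afterwards, as a hedged check:

	# crictl reads /etc/crictl.yaml by default, so these are equivalent:
	sudo crictl images --output json
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock images --output json
	# and crio.conf should now carry: pause_image = "k8s.gcr.io/pause:3.4.1"
	grep '^pause_image' /etc/crio/crio.conf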
	I0813 00:36:31.476279  865003 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 00:36:31.483451  865003 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 00:36:31.483504  865003 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 00:36:31.502807  865003 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
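The status-255 sysctl above is an expected negative probe: the key is absent until br_netfilter is loaded, so the provisioner falls back to modprobe and then enables IPv4 forwarding. The same fallback as a compact sketch:

	# Make bridged pod traffic visible to iptables; turn on IPv4 forwarding
	if ! sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
	  sudo modprobe br_netfilter   # creates the missing /proc/sys key
	fi
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"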
	I0813 00:36:31.510237  865003 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 00:36:31.646643  865003 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 00:36:31.970083  865003 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 00:36:31.970164  865003 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 00:36:31.975365  865003 start.go:417] Will wait 60s for crictl version
	I0813 00:36:31.975430  865003 ssh_runner.go:149] Run: sudo crictl version
	I0813 00:36:32.007760  865003 start.go:426] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.2
	RuntimeApiVersion:  v1alpha1
	I0813 00:36:32.007851  865003 ssh_runner.go:149] Run: crio --version
	I0813 00:36:32.098107  865003 ssh_runner.go:149] Run: crio --version
	I0813 00:36:33.139303  865003 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.2 ...
	I0813 00:36:33.139368  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetIP
	I0813 00:36:33.181209  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | domain calico-20210813002446-820289 has defined MAC address 52:54:00:a1:07:a7 in network mk-calico-20210813002446-820289
	I0813 00:36:33.181644  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:07:a7", ip: ""} in network mk-calico-20210813002446-820289: {Iface:virbr2 ExpiryTime:2021-08-13 01:36:14 +0000 UTC Type:0 Mac:52:54:00:a1:07:a7 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:calico-20210813002446-820289 Clientid:01:52:54:00:a1:07:a7}
	I0813 00:36:33.181686  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | domain calico-20210813002446-820289 has defined IP address 192.168.50.30 and MAC address 52:54:00:a1:07:a7 in network mk-calico-20210813002446-820289
	I0813 00:36:33.181963  865003 ssh_runner.go:149] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0813 00:36:33.186689  865003 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 00:36:33.197275  865003 localpath.go:92] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/client.crt -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002446-820289/client.crt
	I0813 00:36:33.197393  865003 localpath.go:117] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/client.key -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002446-820289/client.key
	I0813 00:36:33.197506  865003 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 00:36:33.197552  865003 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 00:36:33.276292  865003 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 00:36:33.276321  865003 crio.go:333] Images already preloaded, skipping extraction
	I0813 00:36:33.276372  865003 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 00:36:33.314839  865003 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 00:36:33.314877  865003 cache_images.go:74] Images are preloaded, skipping loading
	I0813 00:36:33.314961  865003 ssh_runner.go:149] Run: crio config
	I0813 00:36:33.674941  865003 cni.go:93] Creating CNI manager for "calico"
	I0813 00:36:33.674973  865003 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 00:36:33.674990  865003 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.30 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-20210813002446-820289 NodeName:calico-20210813002446-820289 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.30"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.50.30 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 00:36:33.675164  865003 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.30
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "calico-20210813002446-820289"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.30
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.30"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
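The manifest above bundles InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in a single file. It can be vetted before it ever mutates the node; a hedged check using the same binaries the test stages:

	# Render what kubeadm would do without changing the host (v1.21 supports --dry-run)
	sudo /var/lib/minikube/binaries/v1.21.3/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml --dry-run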
	I0813 00:36:33.675281  865003 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --hostname-override=calico-20210813002446-820289 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.30 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:calico-20210813002446-820289 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
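The unit snippet above is installed as a systemd drop-in; the empty ExecStart= is the standard way to clear the vendor-defined command before substituting minikube's own. A sketch of how such a drop-in is written (flags abridged here; the full set is in the ExecStart line logged above):

	# Illustrative only: override kubelet's ExecStart via a drop-in
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock
	EOF
	sudo systemctl daemon-reload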
	I0813 00:36:33.675350  865003 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 00:36:33.685219  865003 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 00:36:33.685291  865003 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 00:36:33.693865  865003 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (508 bytes)
	I0813 00:36:33.708375  865003 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 00:36:33.723818  865003 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2072 bytes)
	I0813 00:36:33.743788  865003 ssh_runner.go:149] Run: grep 192.168.50.30	control-plane.minikube.internal$ /etc/hosts
	I0813 00:36:33.749138  865003 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.30	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
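This is the same replace-or-append idea as the earlier hostname step, but done by rewriting the whole file through a temp copy, which keeps the edit close to atomic. Reduced to a reusable sketch (ip/name are placeholders):

	# Idempotently pin a name to an IP in /etc/hosts via a temp file
	ip=192.168.50.30 name=control-plane.minikube.internal
	{ grep -v $'\t'"${name}\$" /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts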
	I0813 00:36:33.761139  865003 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002446-820289 for IP: 192.168.50.30
	I0813 00:36:33.761208  865003 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key
	I0813 00:36:33.761232  865003 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key
	I0813 00:36:33.761311  865003 certs.go:290] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002446-820289/client.key
	I0813 00:36:33.761342  865003 certs.go:294] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002446-820289/apiserver.key.2a920f45
	I0813 00:36:33.761353  865003 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002446-820289/apiserver.crt.2a920f45 with IP's: [192.168.50.30 10.96.0.1 127.0.0.1 10.0.0.1]
	I0813 00:36:33.902614  865003 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002446-820289/apiserver.crt.2a920f45 ...
	I0813 00:36:33.902647  865003 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002446-820289/apiserver.crt.2a920f45: {Name:mkfa6a6f81a5a17a07f9a99a90c593237a6c56d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:36:33.902895  865003 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002446-820289/apiserver.key.2a920f45 ...
	I0813 00:36:33.902919  865003 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002446-820289/apiserver.key.2a920f45: {Name:mk76479e61b2db399138df62d0e60f90b0f0819b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:36:33.903046  865003 certs.go:305] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002446-820289/apiserver.crt.2a920f45 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002446-820289/apiserver.crt
	I0813 00:36:33.903163  865003 certs.go:309] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002446-820289/apiserver.key.2a920f45 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002446-820289/apiserver.key
	I0813 00:36:33.903237  865003 certs.go:294] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002446-820289/proxy-client.key
	I0813 00:36:33.903259  865003 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002446-820289/proxy-client.crt with IP's: []
	I0813 00:36:34.051912  865003 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002446-820289/proxy-client.crt ...
	I0813 00:36:34.051950  865003 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002446-820289/proxy-client.crt: {Name:mkc76662c92a23777c30badda8e73b720fbd2f97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:36:34.052177  865003 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002446-820289/proxy-client.key ...
	I0813 00:36:34.052197  865003 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002446-820289/proxy-client.key: {Name:mkd8355baadc8ec86f11342c30ff9b280476f808 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:36:34.052417  865003 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/820289.pem (1338 bytes)
	W0813 00:36:34.052464  865003 certs.go:369] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/820289_empty.pem, impossibly tiny 0 bytes
	I0813 00:36:34.052483  865003 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 00:36:34.052518  865003 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem (1078 bytes)
	I0813 00:36:34.052551  865003 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem (1123 bytes)
	I0813 00:36:34.052583  865003 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem (1679 bytes)
	I0813 00:36:34.052637  865003 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/8202892.pem (1708 bytes)
	I0813 00:36:34.053813  865003 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002446-820289/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 00:36:34.077572  865003 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002446-820289/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 00:36:34.101848  865003 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002446-820289/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 00:36:34.122642  865003 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002446-820289/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0813 00:36:34.143842  865003 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 00:36:34.166401  865003 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0813 00:36:34.187273  865003 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 00:36:34.208434  865003 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 00:36:34.227399  865003 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/820289.pem --> /usr/share/ca-certificates/820289.pem (1338 bytes)
	I0813 00:36:34.247564  865003 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/8202892.pem --> /usr/share/ca-certificates/8202892.pem (1708 bytes)
	I0813 00:36:34.266561  865003 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 00:36:34.286549  865003 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 00:36:34.301463  865003 ssh_runner.go:149] Run: openssl version
	I0813 00:36:34.307815  865003 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8202892.pem && ln -fs /usr/share/ca-certificates/8202892.pem /etc/ssl/certs/8202892.pem"
	I0813 00:36:34.316702  865003 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8202892.pem
	I0813 00:36:34.321640  865003 certs.go:416] hashing: -rw-r--r-- 1 root root 1708 Aug 12 23:59 /usr/share/ca-certificates/8202892.pem
	I0813 00:36:34.321695  865003 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8202892.pem
	I0813 00:36:34.329649  865003 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8202892.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 00:36:34.340375  865003 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 00:36:34.351034  865003 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 00:36:34.356588  865003 certs.go:416] hashing: -rw-r--r-- 1 root root 1111 Aug 12 23:51 /usr/share/ca-certificates/minikubeCA.pem
	I0813 00:36:34.356635  865003 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 00:36:34.362976  865003 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 00:36:34.372361  865003 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/820289.pem && ln -fs /usr/share/ca-certificates/820289.pem /etc/ssl/certs/820289.pem"
	I0813 00:36:34.380685  865003 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/820289.pem
	I0813 00:36:34.386106  865003 certs.go:416] hashing: -rw-r--r-- 1 root root 1338 Aug 12 23:59 /usr/share/ca-certificates/820289.pem
	I0813 00:36:34.386155  865003 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/820289.pem
	I0813 00:36:34.392344  865003 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/820289.pem /etc/ssl/certs/51391683.0"
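The symlink names above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject-hash names: the c_rehash convention that lets TLS libraries locate a CA in /etc/ssl/certs by hashing its subject. One such link, produced by hand:

	# OpenSSL looks up CAs by subject hash, i.e. /etc/ssl/certs/<hash>.0
	pem=/usr/share/ca-certificates/820289.pem
	h=$(openssl x509 -hash -noout -in "$pem")
	sudo ln -fs "$pem" "/etc/ssl/certs/${h}.0"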
	I0813 00:36:34.400942  865003 kubeadm.go:390] StartCluster: {Name:calico-20210813002446-820289 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12122/minikube-v1.22.0-1628238775-12122.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:calico-20210813002446-820289 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.30 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 00:36:34.401037  865003 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0813 00:36:34.401080  865003 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 00:36:34.442588  865003 cri.go:76] found id: ""
	I0813 00:36:34.442656  865003 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 00:36:34.452866  865003 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 00:36:34.460168  865003 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 00:36:34.469151  865003 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 00:36:34.469197  865003 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	I0813 00:36:35.375234  865003 out.go:204]   - Generating certificates and keys ...
	I0813 00:36:38.545330  865003 out.go:204]   - Booting up control plane ...
	I0813 00:36:56.712512  865003 out.go:204]   - Configuring RBAC rules ...
	I0813 00:36:57.552312  865003 cni.go:93] Creating CNI manager for "calico"
	I0813 00:36:57.555151  865003 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0813 00:36:57.555419  865003 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0813 00:36:57.555443  865003 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (202053 bytes)
	I0813 00:36:57.607576  865003 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0813 00:37:00.297665  865003 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (2.690039995s)
	I0813 00:37:00.297724  865003 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 00:37:00.297844  865003 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:37:00.297938  865003 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=dc1c3ca26e9449ce488a773126b8450402c94a19 minikube.k8s.io/name=calico-20210813002446-820289 minikube.k8s.io/updated_at=2021_08_13T00_37_00_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:37:00.648992  865003 ops.go:34] apiserver oom_adj: -16
	I0813 00:37:00.649086  865003 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:37:01.306217  865003 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:37:01.806291  865003 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:37:02.306414  865003 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:37:02.806430  865003 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:37:03.306593  865003 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:37:03.806520  865003 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:37:04.306257  865003 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:37:04.806386  865003 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:37:05.305702  865003 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:37:05.805595  865003 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:37:06.306495  865003 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:37:06.806254  865003 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:37:07.306671  865003 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:37:07.805867  865003 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:37:08.305716  865003 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:37:08.807894  865003 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:37:09.306365  865003 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:37:09.805779  865003 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:37:10.188792  865003 kubeadm.go:985] duration metric: took 9.890988639s to wait for elevateKubeSystemPrivileges.
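The burst of identical "kubectl get sa default" runs above is a poll loop: kubeadm returns before the controller-manager has minted the default ServiceAccount, and privileges cannot be granted until it exists, so the tooling retries roughly every 500ms. Equivalent to:

	# Poll until the default ServiceAccount exists (what the ~10s of retries do)
	until sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done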
	I0813 00:37:10.188833  865003 kubeadm.go:392] StartCluster complete in 35.787899219s
	I0813 00:37:10.188858  865003 settings.go:142] acquiring lock: {Name:mk8798f78c6f0a1d20052a3e99a18e56ee754eb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:37:10.188959  865003 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0813 00:37:10.190953  865003 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig: {Name:mk56dc63045ab5614dcc5cc2eaf1f7d3442c655e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:37:10.912658  865003 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-20210813002446-820289" rescaled to 1
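kubeadm deploys CoreDNS with two replicas; on a single-node cluster minikube rescales the Deployment to one. Done by hand it would be roughly (a hedged equivalent, using the test's context name):

	kubectl --context calico-20210813002446-820289 -n kube-system \
	  scale deployment coredns --replicas=1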
	I0813 00:37:10.912804  865003 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 00:37:10.912973  865003 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0813 00:37:10.913033  865003 addons.go:59] Setting storage-provisioner=true in profile "calico-20210813002446-820289"
	I0813 00:37:10.913050  865003 addons.go:59] Setting default-storageclass=true in profile "calico-20210813002446-820289"
	I0813 00:37:10.913052  865003 addons.go:135] Setting addon storage-provisioner=true in "calico-20210813002446-820289"
	W0813 00:37:10.913062  865003 addons.go:147] addon storage-provisioner should already be in state true
	I0813 00:37:10.913070  865003 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-20210813002446-820289"
	I0813 00:37:10.913101  865003 host.go:66] Checking if "calico-20210813002446-820289" exists ...
	I0813 00:37:10.912810  865003 start.go:226] Will wait 5m0s for node &{Name: IP:192.168.50.30 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 00:37:10.915044  865003 out.go:177] * Verifying Kubernetes components...
	I0813 00:37:10.915124  865003 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 00:37:10.913630  865003 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 00:37:10.915191  865003 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 00:37:10.913635  865003 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 00:37:10.915242  865003 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 00:37:10.938838  865003 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:46105
	I0813 00:37:10.939508  865003 main.go:130] libmachine: () Calling .GetVersion
	I0813 00:37:10.940162  865003 main.go:130] libmachine: Using API Version  1
	I0813 00:37:10.940181  865003 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 00:37:10.940783  865003 main.go:130] libmachine: () Calling .GetMachineName
	I0813 00:37:10.940996  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetState
	I0813 00:37:10.943320  865003 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40593
	I0813 00:37:10.943945  865003 main.go:130] libmachine: () Calling .GetVersion
	I0813 00:37:10.944569  865003 main.go:130] libmachine: Using API Version  1
	I0813 00:37:10.944595  865003 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 00:37:10.945148  865003 main.go:130] libmachine: () Calling .GetMachineName
	I0813 00:37:10.945818  865003 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 00:37:10.945863  865003 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 00:37:10.972588  865003 addons.go:135] Setting addon default-storageclass=true in "calico-20210813002446-820289"
	W0813 00:37:10.972676  865003 addons.go:147] addon default-storageclass should already be in state true
	I0813 00:37:10.972749  865003 host.go:66] Checking if "calico-20210813002446-820289" exists ...
	I0813 00:37:10.973292  865003 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 00:37:10.973387  865003 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 00:37:10.978646  865003 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:46673
	I0813 00:37:10.979333  865003 main.go:130] libmachine: () Calling .GetVersion
	I0813 00:37:10.985632  865003 main.go:130] libmachine: Using API Version  1
	I0813 00:37:10.985657  865003 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 00:37:10.993407  865003 main.go:130] libmachine: () Calling .GetMachineName
	I0813 00:37:10.993418  865003 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43099
	I0813 00:37:10.994059  865003 main.go:130] libmachine: () Calling .GetVersion
	I0813 00:37:10.994081  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetState
	I0813 00:37:10.994624  865003 main.go:130] libmachine: Using API Version  1
	I0813 00:37:10.994642  865003 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 00:37:10.995042  865003 main.go:130] libmachine: () Calling .GetMachineName
	I0813 00:37:10.996103  865003 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 00:37:10.996142  865003 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 00:37:10.998304  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .DriverName
	I0813 00:37:11.000509  865003 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 00:37:11.000696  865003 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 00:37:11.000736  865003 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 00:37:11.000773  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetSSHHostname
	I0813 00:37:11.006529  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | domain calico-20210813002446-820289 has defined MAC address 52:54:00:a1:07:a7 in network mk-calico-20210813002446-820289
	I0813 00:37:11.007101  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:07:a7", ip: ""} in network mk-calico-20210813002446-820289: {Iface:virbr2 ExpiryTime:2021-08-13 01:36:14 +0000 UTC Type:0 Mac:52:54:00:a1:07:a7 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:calico-20210813002446-820289 Clientid:01:52:54:00:a1:07:a7}
	I0813 00:37:11.007131  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | domain calico-20210813002446-820289 has defined IP address 192.168.50.30 and MAC address 52:54:00:a1:07:a7 in network mk-calico-20210813002446-820289
	I0813 00:37:11.007314  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetSSHPort
	I0813 00:37:11.015336  865003 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40911
	I0813 00:37:11.015795  865003 main.go:130] libmachine: () Calling .GetVersion
	I0813 00:37:11.015905  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetSSHKeyPath
	I0813 00:37:11.016070  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetSSHUsername
	I0813 00:37:11.016194  865003 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/calico-20210813002446-820289/id_rsa Username:docker}
	I0813 00:37:11.016679  865003 main.go:130] libmachine: Using API Version  1
	I0813 00:37:11.016698  865003 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 00:37:11.017015  865003 main.go:130] libmachine: () Calling .GetMachineName
	I0813 00:37:11.017222  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetState
	I0813 00:37:11.020293  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .DriverName
	I0813 00:37:11.020500  865003 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 00:37:11.020518  865003 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 00:37:11.020540  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetSSHHostname
	I0813 00:37:11.026184  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | domain calico-20210813002446-820289 has defined MAC address 52:54:00:a1:07:a7 in network mk-calico-20210813002446-820289
	I0813 00:37:11.026645  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetSSHPort
	I0813 00:37:11.026696  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:07:a7", ip: ""} in network mk-calico-20210813002446-820289: {Iface:virbr2 ExpiryTime:2021-08-13 01:36:14 +0000 UTC Type:0 Mac:52:54:00:a1:07:a7 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:calico-20210813002446-820289 Clientid:01:52:54:00:a1:07:a7}
	I0813 00:37:11.026710  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | domain calico-20210813002446-820289 has defined IP address 192.168.50.30 and MAC address 52:54:00:a1:07:a7 in network mk-calico-20210813002446-820289
	I0813 00:37:11.026812  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetSSHKeyPath
	I0813 00:37:11.026938  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .GetSSHUsername
	I0813 00:37:11.027036  865003 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/calico-20210813002446-820289/id_rsa Username:docker}
	I0813 00:37:11.152565  865003 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0813 00:37:11.152625  865003 node_ready.go:35] waiting up to 5m0s for node "calico-20210813002446-820289" to be "Ready" ...
	I0813 00:37:11.163861  865003 node_ready.go:49] node "calico-20210813002446-820289" has status "Ready":"True"
	I0813 00:37:11.163883  865003 node_ready.go:38] duration metric: took 11.228464ms waiting for node "calico-20210813002446-820289" to be "Ready" ...
	I0813 00:37:11.163894  865003 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 00:37:11.188102  865003 pod_ready.go:78] waiting up to 5m0s for pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace to be "Ready" ...
	I0813 00:37:11.188472  865003 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 00:37:11.259751  865003 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 00:37:12.630830  865003 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.478217813s)
	I0813 00:37:12.630937  865003 start.go:736] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS
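	(The sed pipeline completed above injects a hosts block into the CoreDNS ConfigMap so that host.minikube.internal resolves to the host-side gateway, 192.168.50.1. A minimal sketch of the resulting Corefile fragment, reconstructed from the sed expression in the log; the surrounding plugin order is assumed to match the stock minikube CoreDNS config:
	        hosts {
	           192.168.50.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
	)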
	I0813 00:37:12.937983  865003 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.67818335s)
	I0813 00:37:12.938061  865003 main.go:130] libmachine: Making call to close driver server
	I0813 00:37:12.938091  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .Close
	I0813 00:37:12.938387  865003 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.749879041s)
	I0813 00:37:12.938443  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | Closing plugin on server side
	I0813 00:37:12.938461  865003 main.go:130] libmachine: Making call to close driver server
	I0813 00:37:12.938482  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .Close
	I0813 00:37:12.938558  865003 main.go:130] libmachine: Successfully made call to close driver server
	I0813 00:37:12.938568  865003 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 00:37:12.938582  865003 main.go:130] libmachine: Making call to close driver server
	I0813 00:37:12.938590  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .Close
	I0813 00:37:12.938780  865003 main.go:130] libmachine: Successfully made call to close driver server
	I0813 00:37:12.938809  865003 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 00:37:12.938819  865003 main.go:130] libmachine: Making call to close driver server
	I0813 00:37:12.938828  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .Close
	I0813 00:37:12.938783  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | Closing plugin on server side
	I0813 00:37:12.940095  865003 main.go:130] libmachine: Successfully made call to close driver server
	I0813 00:37:12.940127  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | Closing plugin on server side
	I0813 00:37:12.940165  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | Closing plugin on server side
	I0813 00:37:12.940200  865003 main.go:130] libmachine: Successfully made call to close driver server
	I0813 00:37:12.940213  865003 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 00:37:12.940225  865003 main.go:130] libmachine: Making call to close driver server
	I0813 00:37:12.940241  865003 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 00:37:12.940309  865003 main.go:130] libmachine: (calico-20210813002446-820289) Calling .Close
	I0813 00:37:12.940542  865003 main.go:130] libmachine: (calico-20210813002446-820289) DBG | Closing plugin on server side
	I0813 00:37:12.940555  865003 main.go:130] libmachine: Successfully made call to close driver server
	I0813 00:37:12.940564  865003 main.go:130] libmachine: Making call to close connection to plugin binary
	I0813 00:37:12.942332  865003 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0813 00:37:12.942363  865003 addons.go:344] enableAddons completed in 2.029391099s
	I0813 00:37:13.244919  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:37:19.636908  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:37:21.871907  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:37:24.222557  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:37:26.715406  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:37:28.718063  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:37:30.721347  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:37:33.228383  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:37:35.718201  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:37:37.727637  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:37:40.217489  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:37:42.221050  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:37:44.727139  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:37:47.234186  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:37:49.717849  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:37:51.721929  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:37:54.223606  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:37:56.717025  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:37:58.723153  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:38:00.724107  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:38:03.219909  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:38:05.717809  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:38:08.220561  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:38:10.718298  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:38:12.719395  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:38:15.219021  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:38:17.227248  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:38:19.718948  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:38:21.719928  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:38:23.720976  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:38:26.225122  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:38:28.715171  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:38:31.215939  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:38:33.230204  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:38:35.737396  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:38:38.219611  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:38:43.673684  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:38:45.721562  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:38:47.722257  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:38:49.722464  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:38:52.235115  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:38:54.719129  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:38:57.224375  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:38:59.716578  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:39:01.719099  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:39:03.727250  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:39:08.813456  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:39:12.248651  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:39:14.719878  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:39:17.229467  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:39:19.230401  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:39:21.716778  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:39:23.730962  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:39:26.220443  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:39:28.725950  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:39:31.218733  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:39:33.222175  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:39:35.233240  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:39:37.234751  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:39:39.726298  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:39:42.218607  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:39:45.888487  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:39:48.752500  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:39:51.222847  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:39:53.225189  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:39:55.716089  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:39:57.716529  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:40:00.219418  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:40:02.222042  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:40:04.718550  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:40:07.220536  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:40:09.222901  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:40:11.224598  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:40:13.718231  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:40:15.718571  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:40:17.718962  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:40:20.217683  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:40:22.217937  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:40:24.717217  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:40:27.225039  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:40:29.226968  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:40:31.717824  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:40:33.722977  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:40:35.730836  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:40:38.221608  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:40:40.720212  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:40:43.236408  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:40:45.714268  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:40:47.715159  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:40:49.716408  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:40:51.719014  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:40:54.213832  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:40:56.216389  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:40:58.717895  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:41:01.222230  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:41:03.718328  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:41:06.217095  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:41:08.222163  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:41:10.716757  865003 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace has status "Ready":"False"
	I0813 00:41:11.226982  865003 pod_ready.go:81] duration metric: took 4m0.038844988s waiting for pod "calico-kube-controllers-85ff9ff759-5x452" in "kube-system" namespace to be "Ready" ...
	E0813 00:41:11.227005  865003 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0813 00:41:11.227014  865003 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-828zm" in "kube-system" namespace to be "Ready" ...
	I0813 00:41:13.251945  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:41:15.744000  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:41:17.745717  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:41:20.244912  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:41:22.245794  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:41:24.252898  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:41:26.745305  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:41:28.745946  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:41:30.748780  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:41:33.245261  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:41:35.296038  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:41:37.747689  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:41:39.753158  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:41:42.248802  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:41:44.745383  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:41:46.746798  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:41:49.247165  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:41:51.746248  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:41:54.243921  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:41:56.245780  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:41:58.746996  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:42:01.246085  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:42:03.748122  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:42:06.245003  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:42:08.252004  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:42:10.746834  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:42:12.746903  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:42:15.247923  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:42:17.745404  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:42:19.746656  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:42:22.245976  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:42:24.246583  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:42:26.247666  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:42:28.745840  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:42:31.250066  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:42:33.747262  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:42:35.747667  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:42:38.248493  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:42:40.747517  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:42:43.247678  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:42:45.251369  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:42:47.745564  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:42:49.746294  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:42:51.748449  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:42:54.247518  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:42:56.250939  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:42:58.744435  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:43:00.746627  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:43:03.250493  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:43:05.784911  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:43:08.244318  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:43:10.250691  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:43:12.250741  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:43:14.747277  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:43:17.245442  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:43:19.252288  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:43:21.745075  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:43:23.746201  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:43:25.748994  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:43:28.244538  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:43:30.245346  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:43:32.248346  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:43:34.746549  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:43:36.746652  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:43:38.747044  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:43:40.763780  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:43:43.244979  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:43:45.247059  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:43:47.746262  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:43:50.251539  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:43:52.744918  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:43:54.750021  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:43:56.751910  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:43:59.243912  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:44:01.248357  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:44:03.747472  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:44:06.252674  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:44:08.752180  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:44:11.245142  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:44:13.248472  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:44:15.251356  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:44:17.745055  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:44:19.747100  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:44:22.245990  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:44:24.252202  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:44:26.744313  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:44:28.744920  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:44:30.745400  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:44:32.748021  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:44:35.249203  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:44:37.749333  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:44:40.244797  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:44:42.245069  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:44:44.245738  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:44:46.248956  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:44:48.747126  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:44:51.245406  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:44:53.245935  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:44:55.246411  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:44:57.246660  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:44:59.248640  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:45:01.744216  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:45:03.745219  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:45:05.745526  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:45:07.747912  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:45:10.250848  865003 pod_ready.go:102] pod "calico-node-828zm" in "kube-system" namespace has status "Ready":"False"
	I0813 00:45:11.258548  865003 pod_ready.go:81] duration metric: took 4m0.031517333s waiting for pod "calico-node-828zm" in "kube-system" namespace to be "Ready" ...
	E0813 00:45:11.258581  865003 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0813 00:45:11.258601  865003 pod_ready.go:38] duration metric: took 8m0.094692627s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 00:45:11.261259  865003 out.go:177] 
	W0813 00:45:11.261433  865003 out.go:242] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	W0813 00:45:11.261447  865003 out.go:242] * 
	[warning]: invalid value provided to Color, using default
	W0813 00:45:11.263405  865003 out.go:242] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                                         │
	│                                                                                                                                                       │
	│    * Please attach the following file to the GitHub issue:                                                                                            │
	│    * - /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/logs/lastStart.txt    │
	│                                                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0813 00:45:11.265007  865003 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:100: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (553.70s)
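The failure above is the Calico CNI pods never reaching "Ready": both calico-kube-controllers-85ff9ff759-5x452 and calico-node-828zm time out in their pod_ready waits, so minikube exits with GUEST_START (exit status 80) and the test fails at net_test.go:100. A minimal sketch of how the stuck pods could be inspected on a live cluster, assuming the profile/context name taken from the log and the stock Calico manifest labels and container names:

	kubectl --context calico-20210813002446-820289 -n kube-system get pods -o wide
	kubectl --context calico-20210813002446-820289 -n kube-system describe pod calico-node-828zm
	kubectl --context calico-20210813002446-820289 -n kube-system logs -l k8s-app=calico-node -c calico-node --tail=50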

                                                
                                    

Test pass (230/263)

Order passed test Duration
3 TestDownloadOnly/v1.14.0/json-events 8.61
4 TestDownloadOnly/v1.14.0/preload-exists 0
8 TestDownloadOnly/v1.14.0/LogsDuration 0.07
10 TestDownloadOnly/v1.21.3/json-events 6.26
11 TestDownloadOnly/v1.21.3/preload-exists 0
15 TestDownloadOnly/v1.21.3/LogsDuration 0.06
17 TestDownloadOnly/v1.22.0-rc.0/json-events 9.35
18 TestDownloadOnly/v1.22.0-rc.0/preload-exists 0
22 TestDownloadOnly/v1.22.0-rc.0/LogsDuration 0.07
23 TestDownloadOnly/DeleteAll 0.22
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.22
26 TestOffline 100.27
29 TestAddons/parallel/Registry 14.7
31 TestAddons/parallel/MetricsServer 5.7
32 TestAddons/parallel/HelmTiller 14.8
33 TestAddons/parallel/Olm 72.58
34 TestAddons/parallel/CSI 69.2
35 TestAddons/parallel/GCPAuth 70.41
36 TestCertOptions 83.88
38 TestForceSystemdFlag 77.45
39 TestForceSystemdEnv 95.25
40 TestKVMDriverInstallOrUpdate 6.31
44 TestErrorSpam/setup 53.4
45 TestErrorSpam/start 0.42
46 TestErrorSpam/status 0.73
47 TestErrorSpam/pause 3.43
48 TestErrorSpam/unpause 1.82
49 TestErrorSpam/stop 5.26
52 TestFunctional/serial/CopySyncFile 0
53 TestFunctional/serial/StartWithProxy 77.65
54 TestFunctional/serial/AuditLog 0
55 TestFunctional/serial/SoftStart 6.02
56 TestFunctional/serial/KubeContext 0.04
57 TestFunctional/serial/KubectlGetPods 0.23
60 TestFunctional/serial/CacheCmd/cache/add_remote 3.37
61 TestFunctional/serial/CacheCmd/cache/add_local 1.61
62 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.05
63 TestFunctional/serial/CacheCmd/cache/list 0.05
64 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
65 TestFunctional/serial/CacheCmd/cache/cache_reload 1.97
66 TestFunctional/serial/CacheCmd/cache/delete 0.11
67 TestFunctional/serial/MinikubeKubectlCmd 0.11
68 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
69 TestFunctional/serial/ExtraConfig 33.28
70 TestFunctional/serial/ComponentHealth 0.07
71 TestFunctional/serial/LogsCmd 1.4
72 TestFunctional/serial/LogsFileCmd 1.4
74 TestFunctional/parallel/ConfigCmd 0.38
75 TestFunctional/parallel/DashboardCmd 6.09
76 TestFunctional/parallel/DryRun 0.33
77 TestFunctional/parallel/InternationalLanguage 0.16
78 TestFunctional/parallel/StatusCmd 1.18
81 TestFunctional/parallel/ServiceCmd 16.82
82 TestFunctional/parallel/AddonsCmd 0.16
83 TestFunctional/parallel/PersistentVolumeClaim 47.06
85 TestFunctional/parallel/SSHCmd 0.44
86 TestFunctional/parallel/CpCmd 0.44
87 TestFunctional/parallel/MySQL 34.54
88 TestFunctional/parallel/FileSync 0.39
89 TestFunctional/parallel/CertSync 1.27
93 TestFunctional/parallel/NodeLabels 0.07
94 TestFunctional/parallel/LoadImage 2.82
95 TestFunctional/parallel/RemoveImage 2.97
96 TestFunctional/parallel/LoadImageFromFile 5.55
97 TestFunctional/parallel/BuildImage 8.35
98 TestFunctional/parallel/ListImages 0.5
99 TestFunctional/parallel/NonActiveRuntimeDisabled 0.42
101 TestFunctional/parallel/Version/short 0.06
102 TestFunctional/parallel/Version/components 0.63
103 TestFunctional/parallel/ProfileCmd/profile_not_create 0.47
104 TestFunctional/parallel/MountCmd/any-port 13.92
105 TestFunctional/parallel/ProfileCmd/profile_list 0.31
106 TestFunctional/parallel/ProfileCmd/profile_json_output 0.31
107 TestFunctional/parallel/MountCmd/specific-port 1.74
109 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
111 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
112 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
113 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
114 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
115 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
119 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
120 TestFunctional/delete_busybox_image 0.08
121 TestFunctional/delete_my-image_image 0.04
122 TestFunctional/delete_minikube_cached_images 0.03
126 TestJSONOutput/start/Audit 0
128 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
129 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
131 TestJSONOutput/pause/Audit 0
133 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
134 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
136 TestJSONOutput/unpause/Audit 0
138 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
139 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
141 TestJSONOutput/stop/Audit 0
143 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
144 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
145 TestErrorJSONOutput 0.32
148 TestMainNoArgs 0.05
151 TestMultiNode/serial/FreshStart2Nodes 119.18
154 TestMultiNode/serial/AddNode 52.91
155 TestMultiNode/serial/ProfileList 0.23
156 TestMultiNode/serial/CopyFile 1.78
157 TestMultiNode/serial/StopNode 2.93
158 TestMultiNode/serial/StartAfterStop 48.52
159 TestMultiNode/serial/RestartKeepsNodes 176.55
160 TestMultiNode/serial/DeleteNode 1.88
161 TestMultiNode/serial/StopMultiNode 5.29
162 TestMultiNode/serial/RestartMultiNode 115.46
163 TestMultiNode/serial/ValidateNameConflict 61.98
169 TestDebPackageInstall/install_amd64_debian:sid/minikube 0
170 TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver 10.88
172 TestDebPackageInstall/install_amd64_debian:latest/minikube 0
173 TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver 9.96
175 TestDebPackageInstall/install_amd64_debian:10/minikube 0
176 TestDebPackageInstall/install_amd64_debian:10/kvm2-driver 9.82
178 TestDebPackageInstall/install_amd64_debian:9/minikube 0
179 TestDebPackageInstall/install_amd64_debian:9/kvm2-driver 8.11
181 TestDebPackageInstall/install_amd64_ubuntu:latest/minikube 0
182 TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver 16.02
184 TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube 0
185 TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver 14.69
187 TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube 0
188 TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver 18.12
190 TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube 0
191 TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver 18.18
194 TestScheduledStopUnix 88.06
198 TestRunningBinaryUpgrade 249.01
200 TestKubernetesUpgrade 178.72
203 TestPause/serial/Start 93.95
218 TestNetworkPlugins/group/false 0.41
223 TestStartStop/group/old-k8s-version/serial/FirstStart 140.6
224 TestPause/serial/SecondStartNoReconfiguration 48.1
225 TestPause/serial/Pause 0.98
226 TestPause/serial/VerifyStatus 0.28
227 TestPause/serial/Unpause 0.94
228 TestPause/serial/PauseAgain 5.95
229 TestPause/serial/DeletePaused 1.66
230 TestPause/serial/VerifyDeletedResources 0.63
232 TestStartStop/group/no-preload/serial/FirstStart 167.27
234 TestStartStop/group/embed-certs/serial/FirstStart 114.07
235 TestStartStop/group/old-k8s-version/serial/DeployApp 10.76
236 TestStoppedBinaryUpgrade/MinikubeLogs 1.29
237 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.58
239 TestStartStop/group/default-k8s-different-port/serial/FirstStart 94.15
240 TestStartStop/group/old-k8s-version/serial/Stop 10.14
241 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
242 TestStartStop/group/old-k8s-version/serial/SecondStart 86.93
243 TestStartStop/group/embed-certs/serial/DeployApp 11.76
244 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.4
245 TestStartStop/group/embed-certs/serial/Stop 4.15
246 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
247 TestStartStop/group/embed-certs/serial/SecondStart 400.06
248 TestStartStop/group/default-k8s-different-port/serial/DeployApp 11.02
249 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 8.27
250 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 1.84
251 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
252 TestStartStop/group/default-k8s-different-port/serial/Stop 5.4
253 TestStartStop/group/no-preload/serial/DeployApp 13.94
254 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
255 TestStartStop/group/old-k8s-version/serial/Pause 4.97
256 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.96
257 TestStartStop/group/default-k8s-different-port/serial/SecondStart 389.89
259 TestStartStop/group/newest-cni/serial/FirstStart 96.16
260 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.31
261 TestStartStop/group/no-preload/serial/Stop 93.54
262 TestStartStop/group/newest-cni/serial/DeployApp 0
263 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.36
264 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
265 TestStartStop/group/no-preload/serial/SecondStart 416.91
266 TestStartStop/group/newest-cni/serial/Stop 63.44
267 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.16
268 TestStartStop/group/newest-cni/serial/SecondStart 74.54
269 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
270 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
271 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.33
272 TestStartStop/group/newest-cni/serial/Pause 2.85
273 TestNetworkPlugins/group/auto/Start 78.92
274 TestNetworkPlugins/group/auto/KubeletFlags 0.24
275 TestNetworkPlugins/group/auto/NetCatPod 10.5
276 TestNetworkPlugins/group/auto/DNS 0.27
277 TestNetworkPlugins/group/auto/Localhost 0.24
278 TestNetworkPlugins/group/auto/HairPin 0.27
279 TestNetworkPlugins/group/kindnet/Start 87.11
280 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 7.03
281 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.26
282 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.29
283 TestStartStop/group/embed-certs/serial/Pause 3.08
284 TestNetworkPlugins/group/cilium/Start 188.92
285 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 5.03
286 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 5.14
287 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 0.29
288 TestStartStop/group/default-k8s-different-port/serial/Pause 3.11
290 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
291 TestNetworkPlugins/group/kindnet/KubeletFlags 0.26
292 TestNetworkPlugins/group/kindnet/NetCatPod 17.6
293 TestNetworkPlugins/group/kindnet/DNS 0.24
294 TestNetworkPlugins/group/kindnet/Localhost 0.23
295 TestNetworkPlugins/group/kindnet/HairPin 0.21
296 TestNetworkPlugins/group/enable-default-cni/Start 100.38
297 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.04
298 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.13
299 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
300 TestStartStop/group/no-preload/serial/Pause 3
301 TestNetworkPlugins/group/flannel/Start 94.44
302 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
303 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.57
304 TestNetworkPlugins/group/enable-default-cni/DNS 0.31
305 TestNetworkPlugins/group/enable-default-cni/Localhost 0.25
306 TestNetworkPlugins/group/enable-default-cni/HairPin 0.27
307 TestNetworkPlugins/group/bridge/Start 97.58
308 TestNetworkPlugins/group/cilium/ControllerPod 5.03
309 TestNetworkPlugins/group/cilium/KubeletFlags 0.22
310 TestNetworkPlugins/group/cilium/NetCatPod 12.49
311 TestNetworkPlugins/group/cilium/DNS 0.38
312 TestNetworkPlugins/group/cilium/Localhost 0.26
313 TestNetworkPlugins/group/cilium/HairPin 0.29
314 TestNetworkPlugins/group/custom-weave/Start 87.67
315 TestNetworkPlugins/group/flannel/ControllerPod 5.02
316 TestNetworkPlugins/group/flannel/KubeletFlags 1.22
317 TestNetworkPlugins/group/flannel/NetCatPod 12.65
318 TestNetworkPlugins/group/flannel/DNS 0.28
319 TestNetworkPlugins/group/flannel/Localhost 0.27
320 TestNetworkPlugins/group/flannel/HairPin 0.25
321 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
322 TestNetworkPlugins/group/bridge/NetCatPod 11.54
323 TestNetworkPlugins/group/bridge/DNS 0.28
324 TestNetworkPlugins/group/bridge/Localhost 0.23
325 TestNetworkPlugins/group/bridge/HairPin 0.25
326 TestNetworkPlugins/group/custom-weave/KubeletFlags 0.21
327 TestNetworkPlugins/group/custom-weave/NetCatPod 10.5
x
+
TestDownloadOnly/v1.14.0/json-events (8.61s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210812235004-820289 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210812235004-820289 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (8.606580716s)
--- PASS: TestDownloadOnly/v1.14.0/json-events (8.61s)

                                                
                                    
x
+
TestDownloadOnly/v1.14.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/preload-exists
--- PASS: TestDownloadOnly/v1.14.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.14.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20210812235004-820289
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20210812235004-820289: exit status 85 (65.659187ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/12 23:50:04
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0812 23:50:04.243368  820301 out.go:298] Setting OutFile to fd 1 ...
	I0812 23:50:04.243453  820301 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0812 23:50:04.243461  820301 out.go:311] Setting ErrFile to fd 2...
	I0812 23:50:04.243464  820301 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0812 23:50:04.243558  820301 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	W0812 23:50:04.243655  820301 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/config/config.json: no such file or directory
	I0812 23:50:04.243903  820301 out.go:305] Setting JSON to true
	I0812 23:50:04.281002  820301 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":12767,"bootTime":1628799437,"procs":156,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0812 23:50:04.281117  820301 start.go:121] virtualization: kvm guest
	I0812 23:50:04.284339  820301 notify.go:169] Checking for updates...
	I0812 23:50:04.286367  820301 driver.go:335] Setting default libvirt URI to qemu:///system
	I0812 23:50:04.315273  820301 start.go:278] selected driver: kvm2
	I0812 23:50:04.315289  820301 start.go:751] validating driver "kvm2" against <nil>
	I0812 23:50:04.316180  820301 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 23:50:04.316384  820301 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0812 23:50:04.327182  820301 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0812 23:50:04.327229  820301 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0812 23:50:04.327685  820301 start_flags.go:344] Using suggested 6000MB memory alloc based on sys=32179MB, container=0MB
	I0812 23:50:04.327802  820301 start_flags.go:679] Wait components to verify : map[apiserver:true system_pods:true]
	I0812 23:50:04.327854  820301 cni.go:93] Creating CNI manager for ""
	I0812 23:50:04.327862  820301 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0812 23:50:04.327876  820301 start_flags.go:272] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0812 23:50:04.327886  820301 start_flags.go:277] config:
	{Name:download-only-20210812235004-820289 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20210812235004-820289 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0812 23:50:04.328037  820301 iso.go:123] acquiring lock: {Name:mk52748db467e5aa4b344902ee09c9ea40467a67 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 23:50:04.330068  820301 download.go:92] Downloading: https://storage.googleapis.com/minikube-builds/iso/12122/minikube-v1.22.0-1628238775-12122.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/12122/minikube-v1.22.0-1628238775-12122.iso.sha256 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/iso/minikube-v1.22.0-1628238775-12122.iso
	I0812 23:50:06.834439  820301 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime crio
	I0812 23:50:06.858483  820301 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-amd64.tar.lz4
	I0812 23:50:06.858526  820301 cache.go:56] Caching tarball of preloaded images
	I0812 23:50:06.858705  820301 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime crio
	I0812 23:50:06.860814  820301 preload.go:237] getting checksum for preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-amd64.tar.lz4 ...
	I0812 23:50:06.887606  820301 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:70b8731eaaa1b4de2d1cd60021fc1260 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210812235004-820289"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.14.0/LogsDuration (0.07s)
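
Note: the exit status 85 above is the expected outcome here: "minikube logs" fails by design when the profile has no control-plane node, which is always the case for a --download-only profile, and the test counts that failure as a pass. A minimal Go sketch of asserting a specific exit code with os/exec (the binary path and profile name are placeholders; this illustrates the pattern, not the actual aaa_download_only_test.go code):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Run "minikube logs" against a download-only profile (hypothetical name).
	cmd := exec.Command("out/minikube-linux-amd64", "logs", "-p", "download-only-example")
	out, err := cmd.CombinedOutput()

	// Exit status 85 signals "the control plane node does not exist",
	// which is what a profile that was only ever downloaded should report.
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 85 {
		fmt.Println("got expected exit status 85 (no control plane node)")
		return
	}
	fmt.Printf("unexpected result: err=%v\noutput:\n%s\n", err, out)
}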

                                                
                                    
x
+
TestDownloadOnly/v1.21.3/json-events (6.26s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.21.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210812235004-820289 --force --alsologtostderr --kubernetes-version=v1.21.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210812235004-820289 --force --alsologtostderr --kubernetes-version=v1.21.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (6.257393552s)
--- PASS: TestDownloadOnly/v1.21.3/json-events (6.26s)

                                                
                                    
x
+
TestDownloadOnly/v1.21.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.21.3/preload-exists
--- PASS: TestDownloadOnly/v1.21.3/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.21.3/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.21.3/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20210812235004-820289
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20210812235004-820289: exit status 85 (63.886622ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/12 23:50:12
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0812 23:50:12.918435  820337 out.go:298] Setting OutFile to fd 1 ...
	I0812 23:50:12.918525  820337 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0812 23:50:12.918533  820337 out.go:311] Setting ErrFile to fd 2...
	I0812 23:50:12.918536  820337 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0812 23:50:12.918631  820337 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	W0812 23:50:12.918744  820337 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/config/config.json: no such file or directory
	I0812 23:50:12.918845  820337 out.go:305] Setting JSON to true
	I0812 23:50:12.953689  820337 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":12776,"bootTime":1628799437,"procs":156,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0812 23:50:12.953778  820337 start.go:121] virtualization: kvm guest
	I0812 23:50:12.956177  820337 notify.go:169] Checking for updates...
	W0812 23:50:12.958258  820337 start.go:659] api.Load failed for download-only-20210812235004-820289: filestore "download-only-20210812235004-820289": Docker machine "download-only-20210812235004-820289" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0812 23:50:12.958309  820337 driver.go:335] Setting default libvirt URI to qemu:///system
	W0812 23:50:12.958352  820337 start.go:659] api.Load failed for download-only-20210812235004-820289: filestore "download-only-20210812235004-820289": Docker machine "download-only-20210812235004-820289" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0812 23:50:12.986893  820337 start.go:278] selected driver: kvm2
	I0812 23:50:12.986907  820337 start.go:751] validating driver "kvm2" against &{Name:download-only-20210812235004-820289 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12122/minikube-v1.22.0-1628238775-12122.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20210812235004-820289 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0812 23:50:12.987568  820337 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 23:50:12.987757  820337 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0812 23:50:12.998552  820337 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0812 23:50:12.999161  820337 cni.go:93] Creating CNI manager for ""
	I0812 23:50:12.999177  820337 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0812 23:50:12.999183  820337 start_flags.go:277] config:
	{Name:download-only-20210812235004-820289 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12122/minikube-v1.22.0-1628238775-12122.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:download-only-20210812235004-820289 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0812 23:50:12.999311  820337 iso.go:123] acquiring lock: {Name:mk52748db467e5aa4b344902ee09c9ea40467a67 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 23:50:13.000975  820337 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0812 23:50:13.024853  820337 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0812 23:50:13.024878  820337 cache.go:56] Caching tarball of preloaded images
	I0812 23:50:13.025028  820337 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0812 23:50:13.026731  820337 preload.go:237] getting checksum for preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 ...
	I0812 23:50:13.065633  820337 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:5b844d0f443dc130a4f324a367701516 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0812 23:50:17.361080  820337 preload.go:247] saving checksum for preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 ...
	I0812 23:50:17.361188  820337 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210812235004-820289"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.21.3/LogsDuration (0.06s)
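
Note: the preload URLs above end in "?checksum=md5:...", and the log shows the checksum being saved and then verified once the tarball is on disk. A minimal Go sketch of that post-download verification step, using the file name and md5 from the v1.21.3 log above (an illustration of the idea, not minikube's actual preload.go):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"os"
)

func main() {
	// Expected md5 from the "?checksum=md5:..." suffix in the log above.
	const wantMD5 = "5b844d0f443dc130a4f324a367701516"

	f, err := os.Open("preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Stream the tarball through the hash rather than reading it all at once.
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		log.Fatal(err)
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		log.Fatalf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	fmt.Println("preload tarball checksum OK")
}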

                                                
                                    
x
+
TestDownloadOnly/v1.22.0-rc.0/json-events (9.35s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.0-rc.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210812235004-820289 --force --alsologtostderr --kubernetes-version=v1.22.0-rc.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210812235004-820289 --force --alsologtostderr --kubernetes-version=v1.22.0-rc.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (9.345760756s)
--- PASS: TestDownloadOnly/v1.22.0-rc.0/json-events (9.35s)

                                                
                                    
x
+
TestDownloadOnly/v1.22.0-rc.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.22.0-rc.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.22.0-rc.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.0-rc.0/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20210812235004-820289
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20210812235004-820289: exit status 85 (65.160537ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/12 23:50:19
	Running on machine: debian-jenkins-agent-14
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0812 23:50:19.240089  820373 out.go:298] Setting OutFile to fd 1 ...
	I0812 23:50:19.240154  820373 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0812 23:50:19.240159  820373 out.go:311] Setting ErrFile to fd 2...
	I0812 23:50:19.240165  820373 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0812 23:50:19.240270  820373 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	W0812 23:50:19.240373  820373 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/config/config.json: no such file or directory
	I0812 23:50:19.240480  820373 out.go:305] Setting JSON to true
	I0812 23:50:19.275952  820373 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":12782,"bootTime":1628799437,"procs":156,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0812 23:50:19.276324  820373 start.go:121] virtualization: kvm guest
	I0812 23:50:19.278584  820373 notify.go:169] Checking for updates...
	W0812 23:50:19.280737  820373 start.go:659] api.Load failed for download-only-20210812235004-820289: filestore "download-only-20210812235004-820289": Docker machine "download-only-20210812235004-820289" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0812 23:50:19.280781  820373 driver.go:335] Setting default libvirt URI to qemu:///system
	W0812 23:50:19.280807  820373 start.go:659] api.Load failed for download-only-20210812235004-820289: filestore "download-only-20210812235004-820289": Docker machine "download-only-20210812235004-820289" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0812 23:50:19.309244  820373 start.go:278] selected driver: kvm2
	I0812 23:50:19.309260  820373 start.go:751] validating driver "kvm2" against &{Name:download-only-20210812235004-820289 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12122/minikube-v1.22.0-1628238775-12122.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:download-only-20210812235004-820289 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0812 23:50:19.310133  820373 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 23:50:19.310314  820373 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0812 23:50:19.321043  820373 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0812 23:50:19.321693  820373 cni.go:93] Creating CNI manager for ""
	I0812 23:50:19.321707  820373 cni.go:163] "kvm2" driver + crio runtime found, recommending bridge
	I0812 23:50:19.321714  820373 start_flags.go:277] config:
	{Name:download-only-20210812235004-820289 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12122/minikube-v1.22.0-1628238775-12122.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:download-only-20210812235004-820289 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0812 23:50:19.321840  820373 iso.go:123] acquiring lock: {Name:mk52748db467e5aa4b344902ee09c9ea40467a67 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 23:50:19.323436  820373 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0812 23:50:19.345991  820373 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0812 23:50:19.346037  820373 cache.go:56] Caching tarball of preloaded images
	I0812 23:50:19.346195  820373 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0812 23:50:19.347936  820373 preload.go:237] getting checksum for preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	I0812 23:50:19.375380  820373 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:c7902b63f7bbc786f5f337da25a17477 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210812235004-820289"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.22.0-rc.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:189: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.22s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAlwaysSucceeds (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:201: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-20210812235004-820289
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.22s)

                                                
                                    
x
+
TestOffline (100.27s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-20210813002036-820289 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-20210813002036-820289 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m39.231474275s)
helpers_test.go:176: Cleaning up "offline-crio-20210813002036-820289" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-20210813002036-820289
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-20210813002036-820289: (1.0409664s)
--- PASS: TestOffline (100.27s)

                                                
                                    
x
+
TestAddons/parallel/Registry (14.7s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:284: registry stabilized in 16.658587ms

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:286: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/Registry
helpers_test.go:343: "registry-z6f8h" [6005aae5-98e4-43e4-851a-f9a9aa55d491] Running

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:286: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.016931195s
addons_test.go:289: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/Registry
helpers_test.go:343: "registry-proxy-qb8vd" [c6fcdaae-c81a-490b-8cfa-b7dd428f45a4] Running

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:289: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.017816858s
addons_test.go:294: (dbg) Run:  kubectl --context addons-20210812235029-820289 delete po -l run=registry-test --now

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:299: (dbg) Run:  kubectl --context addons-20210812235029-820289 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:299: (dbg) Done: kubectl --context addons-20210812235029-820289 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.940260257s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210812235029-820289 ip

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210812235029-820289 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.70s)
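
Note: the in-cluster probe above is just "wget --spider -S http://registry.kube-system.svc.cluster.local": fetch headers only and succeed if the registry service answers. A minimal Go equivalent of that probe (it assumes it runs inside a pod with cluster DNS, which is why the test launches a busybox helper pod):

package main

import (
	"fmt"
	"net/http"
	"os"
)

func main() {
	// HEAD mirrors wget --spider: only status and headers, no body download.
	resp, err := http.Head("http://registry.kube-system.svc.cluster.local")
	if err != nil {
		fmt.Fprintln(os.Stderr, "registry not reachable:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	fmt.Println("registry responded with", resp.Status)
}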

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.7s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:361: metrics-server stabilized in 16.836701ms

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:363: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
helpers_test.go:343: "metrics-server-77c99ccb96-r4lnv" [4dd7c87f-48e5-4ffe-bb56-0130751d4aac] Running

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:363: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.030034972s
addons_test.go:369: (dbg) Run:  kubectl --context addons-20210812235029-820289 top pods -n kube-system

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210812235029-820289 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.70s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (14.8s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:410: tiller-deploy stabilized in 3.830469ms
addons_test.go:412: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:343: "tiller-deploy-768d69497-p5rbk" [32966f96-4891-49a3-86a8-cfc0e2266a9f] Running

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:412: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.023095934s
addons_test.go:427: (dbg) Run:  kubectl --context addons-20210812235029-820289 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:427: (dbg) Done: kubectl --context addons-20210812235029-820289 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: (9.100515218s)
addons_test.go:432: kubectl --context addons-20210812235029-820289 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
Error attaching, falling back to logs: unable to upgrade connection: container helm-test not found in pod helm-test_kube-system
addons_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210812235029-820289 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (14.80s)
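
Note: the "Unable to use a TTY" stderr above is expected on CI: the test passes -it to kubectl run, the Jenkins shell has no terminal, and kubectl falls back to reading logs, so the helm version output still arrives and the test passes. A minimal sketch of the same check without requesting a TTY, driving kubectl from Go with --attach instead of -it (a hypothetical variant, not the test's own invocation):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// --attach keeps --rm valid without asking for a TTY or stdin.
	out, err := exec.Command("kubectl", "--context", "addons-20210812235029-820289",
		"run", "--rm", "helm-test", "--restart=Never",
		"--image=alpine/helm:2.16.3", "--namespace=kube-system",
		"--serviceaccount=tiller", "--attach", "--", "version").CombinedOutput()
	fmt.Printf("err=%v\n%s", err, out)
}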

                                                
                                    
x
+
TestAddons/parallel/Olm (72.58s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:463: catalog-operator stabilized in 16.886351ms

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:467: olm-operator stabilized in 20.198277ms

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:471: packageserver stabilized in 24.537631ms
addons_test.go:473: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=catalog-operator" in namespace "olm" ...

                                                
                                                
=== CONT  TestAddons/parallel/Olm
helpers_test.go:343: "catalog-operator-75d496484d-x22rr" [9ef3c8dc-b0a8-41e1-a9be-ad8ff5a7cd49] Running

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:473: (dbg) TestAddons/parallel/Olm: app=catalog-operator healthy within 5.016760409s
addons_test.go:476: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=olm-operator" in namespace "olm" ...

                                                
                                                
=== CONT  TestAddons/parallel/Olm
helpers_test.go:343: "olm-operator-859c88c96-vsdlp" [70ea73ed-8703-41ef-8b44-4383788352e1] Running

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:476: (dbg) TestAddons/parallel/Olm: app=olm-operator healthy within 5.014034785s
addons_test.go:479: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=packageserver" in namespace "olm" ...
helpers_test.go:343: "packageserver-6488c6c757-lmf89" [3f0a6c54-7925-430d-ab5f-a01d61a5edcb] Running
helpers_test.go:343: "packageserver-6488c6c757-ncn95" [d1fe5f4e-801b-43bb-8d91-f98278b9b7e0] Running

                                                
                                                
=== CONT  TestAddons/parallel/Olm
helpers_test.go:343: "packageserver-6488c6c757-lmf89" [3f0a6c54-7925-430d-ab5f-a01d61a5edcb] Running
helpers_test.go:343: "packageserver-6488c6c757-ncn95" [d1fe5f4e-801b-43bb-8d91-f98278b9b7e0] Running

                                                
                                                
=== CONT  TestAddons/parallel/Olm
helpers_test.go:343: "packageserver-6488c6c757-lmf89" [3f0a6c54-7925-430d-ab5f-a01d61a5edcb] Running
helpers_test.go:343: "packageserver-6488c6c757-ncn95" [d1fe5f4e-801b-43bb-8d91-f98278b9b7e0] Running

                                                
                                                
=== CONT  TestAddons/parallel/Olm
helpers_test.go:343: "packageserver-6488c6c757-lmf89" [3f0a6c54-7925-430d-ab5f-a01d61a5edcb] Running
helpers_test.go:343: "packageserver-6488c6c757-ncn95" [d1fe5f4e-801b-43bb-8d91-f98278b9b7e0] Running

                                                
                                                
=== CONT  TestAddons/parallel/Olm
helpers_test.go:343: "packageserver-6488c6c757-lmf89" [3f0a6c54-7925-430d-ab5f-a01d61a5edcb] Running
helpers_test.go:343: "packageserver-6488c6c757-ncn95" [d1fe5f4e-801b-43bb-8d91-f98278b9b7e0] Running
2021/08/12 23:54:04 [DEBUG] GET http://192.168.39.112:5000

                                                
                                                
=== CONT  TestAddons/parallel/Olm
helpers_test.go:343: "packageserver-6488c6c757-lmf89" [3f0a6c54-7925-430d-ab5f-a01d61a5edcb] Running
addons_test.go:479: (dbg) TestAddons/parallel/Olm: app=packageserver healthy within 5.015134169s
addons_test.go:482: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "olm.catalogSource=operatorhubio-catalog" in namespace "olm" ...
helpers_test.go:343: "operatorhubio-catalog-lsjpz" [2088d6f1-a488-4bbd-be2e-5a3f6482ffd3] Running

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:482: (dbg) TestAddons/parallel/Olm: olm.catalogSource=operatorhubio-catalog healthy within 5.009589576s
addons_test.go:487: (dbg) Run:  kubectl --context addons-20210812235029-820289 create -f testdata/etcd.yaml
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210812235029-820289 get csv -n my-etcd
addons_test.go:499: kubectl --context addons-20210812235029-820289 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210812235029-820289 get csv -n my-etcd
addons_test.go:499: kubectl --context addons-20210812235029-820289 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210812235029-820289 get csv -n my-etcd
addons_test.go:499: kubectl --context addons-20210812235029-820289 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210812235029-820289 get csv -n my-etcd
addons_test.go:499: kubectl --context addons-20210812235029-820289 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210812235029-820289 get csv -n my-etcd
addons_test.go:499: kubectl --context addons-20210812235029-820289 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210812235029-820289 get csv -n my-etcd

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210812235029-820289 get csv -n my-etcd
--- PASS: TestAddons/parallel/Olm (72.58s)

                                                
                                    
x
+
TestAddons/parallel/CSI (69.2s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:526: csi-hostpath-driver pods stabilized in 17.856723ms
addons_test.go:529: (dbg) Run:  kubectl --context addons-20210812235029-820289 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:534: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20210812235029-820289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20210812235029-820289 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:539: (dbg) Run:  kubectl --context addons-20210812235029-820289 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:544: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:343: "task-pv-pod" [6b6d2331-f2a7-4dee-9354-836dd5e97dee] Pending

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:343: "task-pv-pod" [6b6d2331-f2a7-4dee-9354-836dd5e97dee] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:343: "task-pv-pod" [6b6d2331-f2a7-4dee-9354-836dd5e97dee] Running

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:544: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 35.019528914s
addons_test.go:549: (dbg) Run:  kubectl --context addons-20210812235029-820289 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:554: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20210812235029-820289 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:426: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20210812235029-820289 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:559: (dbg) Run:  kubectl --context addons-20210812235029-820289 delete pod task-pv-pod

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:559: (dbg) Done: kubectl --context addons-20210812235029-820289 delete pod task-pv-pod: (8.599068715s)
addons_test.go:565: (dbg) Run:  kubectl --context addons-20210812235029-820289 delete pvc hpvc
addons_test.go:571: (dbg) Run:  kubectl --context addons-20210812235029-820289 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:576: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20210812235029-820289 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20210812235029-820289 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:581: (dbg) Run:  kubectl --context addons-20210812235029-820289 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:586: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:343: "task-pv-pod-restore" [7ca88a61-837b-477a-a63c-4d16ba97af0f] Pending
helpers_test.go:343: "task-pv-pod-restore" [7ca88a61-837b-477a-a63c-4d16ba97af0f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:343: "task-pv-pod-restore" [7ca88a61-837b-477a-a63c-4d16ba97af0f] Running

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:586: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 10.025510272s
addons_test.go:591: (dbg) Run:  kubectl --context addons-20210812235029-820289 delete pod task-pv-pod-restore
addons_test.go:591: (dbg) Done: kubectl --context addons-20210812235029-820289 delete pod task-pv-pod-restore: (2.404055914s)
addons_test.go:595: (dbg) Run:  kubectl --context addons-20210812235029-820289 delete pvc hpvc-restore
addons_test.go:599: (dbg) Run:  kubectl --context addons-20210812235029-820289 delete volumesnapshot new-snapshot-demo
addons_test.go:603: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210812235029-820289 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:603: (dbg) Done: out/minikube-linux-amd64 -p addons-20210812235029-820289 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.144850442s)
addons_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210812235029-820289 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p addons-20210812235029-820289 addons disable volumesnapshots --alsologtostderr -v=1: (1.000980542s)
--- PASS: TestAddons/parallel/CSI (69.20s)
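For reference, the snapshot/restore sequence this test drives can be replayed by hand against any profile that has the csi-hostpath-driver and volumesnapshots addons enabled. A minimal sketch using the same manifests the test references (paths are relative to minikube's integration-test directory; the kubectl context is whichever profile you started):

  kubectl create -f testdata/csi-hostpath-driver/snapshot.yaml
  kubectl get volumesnapshot new-snapshot-demo -o jsonpath='{.status.readyToUse}'   # poll until "true"
  kubectl delete pod task-pv-pod
  kubectl delete pvc hpvc
  kubectl create -f testdata/csi-hostpath-driver/pvc-restore.yaml      # new PVC sourced from the snapshot
  kubectl create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml   # pod that mounts the restored PVC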

TestAddons/parallel/GCPAuth (70.41s)

=== RUN   TestAddons/parallel/GCPAuth
=== PAUSE TestAddons/parallel/GCPAuth

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:618: (dbg) Run:  kubectl --context addons-20210812235029-820289 create -f testdata/busybox.yaml

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:624: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [32b4ebc6-1950-4af1-9d15-bd7d8ddbe52c] Pending
helpers_test.go:343: "busybox" [32b4ebc6-1950-4af1-9d15-bd7d8ddbe52c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [32b4ebc6-1950-4af1-9d15-bd7d8ddbe52c] Running

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:624: (dbg) TestAddons/parallel/GCPAuth: integration-test=busybox healthy within 10.018099251s
addons_test.go:630: (dbg) Run:  kubectl --context addons-20210812235029-820289 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:667: (dbg) Run:  kubectl --context addons-20210812235029-820289 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:683: (dbg) Run:  kubectl --context addons-20210812235029-820289 apply -f testdata/private-image.yaml
addons_test.go:690: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image" in namespace "default" ...

=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:343: "private-image-7ff9c8c74f-vrcqj" [e4c8cd4c-3a3d-4c12-8863-a7209f060be2] Pending / Ready:ContainersNotReady (containers with unready status: [private-image]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image])

=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:343: "private-image-7ff9c8c74f-vrcqj" [e4c8cd4c-3a3d-4c12-8863-a7209f060be2] Running
addons_test.go:690: (dbg) TestAddons/parallel/GCPAuth: integration-test=private-image healthy within 26.045785574s
addons_test.go:696: (dbg) Run:  kubectl --context addons-20210812235029-820289 apply -f testdata/private-image-eu.yaml
addons_test.go:703: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image-eu" in namespace "default" ...
helpers_test.go:343: "private-image-eu-5956d58f9f-qhzdr" [abed0c04-d2a3-4c50-899b-16a1db4432a3] Pending / Ready:ContainersNotReady (containers with unready status: [private-image-eu]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image-eu])

=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:343: "private-image-eu-5956d58f9f-qhzdr" [abed0c04-d2a3-4c50-899b-16a1db4432a3] Running

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:703: (dbg) TestAddons/parallel/GCPAuth: integration-test=private-image-eu healthy within 26.054530126s
addons_test.go:709: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210812235029-820289 addons disable gcp-auth --alsologtostderr -v=1

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:709: (dbg) Done: out/minikube-linux-amd64 -p addons-20210812235029-820289 addons disable gcp-auth --alsologtostderr -v=1: (6.524339076s)
--- PASS: TestAddons/parallel/GCPAuth (70.41s)
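The gcp-auth addon works by mutating new pods to inject credential configuration, which is exactly what the printenv probes above confirm. The manual equivalent of the test's checks, once any pod (busybox here, from this run) is running:

  kubectl exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
  kubectl exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
  minikube addons disable gcp-auth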

TestCertOptions (83.88s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-20210813002217-820289 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E0813 00:23:04.581740  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812235933-820289/client.crt: no such file or directory

=== CONT  TestCertOptions
cert_options_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-20210813002217-820289 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m22.175821693s)
cert_options_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-20210813002217-820289 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:73: (dbg) Run:  kubectl --context cert-options-20210813002217-820289 config view
helpers_test.go:176: Cleaning up "cert-options-20210813002217-820289" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-20210813002217-820289
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-20210813002217-820289: (1.399624608s)
--- PASS: TestCertOptions (83.88s)
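The flags under test map directly onto the generated apiserver certificate. A condensed version of the start-and-inspect sequence, with a placeholder profile name:

  minikube start -p certopts --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
    --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555
  minikube -p certopts ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
  # the extra IPs and DNS names should be listed under X509v3 Subject Alternative Name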

TestForceSystemdFlag (77.45s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-20210813002211-820289 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-20210813002211-820289 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m16.277279777s)
helpers_test.go:176: Cleaning up "force-systemd-flag-20210813002211-820289" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-20210813002211-820289
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-20210813002211-820289: (1.168925494s)
--- PASS: TestForceSystemdFlag (77.45s)
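The assertion itself lives in docker_test.go; a rough manual check of the same behaviour might look like the sketch below. Reading cgroup_manager out of /etc/crio/crio.conf is an assumption about where CRI-O records its cgroup driver, not something this log shows:

  minikube start -p systemd-flag --force-systemd --driver=kvm2 --container-runtime=crio
  minikube -p systemd-flag ssh "grep -i cgroup_manager /etc/crio/crio.conf"   # assumed location; expect systemd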

TestForceSystemdEnv (95.25s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-20210813002036-820289 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio

=== CONT  TestForceSystemdEnv
docker_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-20210813002036-820289 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m33.607010376s)
helpers_test.go:176: Cleaning up "force-systemd-env-20210813002036-820289" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-20210813002036-820289
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-20210813002036-820289: (1.639515556s)
--- PASS: TestForceSystemdEnv (95.25s)

TestKVMDriverInstallOrUpdate (6.31s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (6.31s)

TestErrorSpam/setup (53.4s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:78: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210812235827-820289 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20210812235827-820289 --driver=kvm2  --container-runtime=crio
E0812 23:58:50.748400  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/client.crt: no such file or directory
E0812 23:58:50.753999  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/client.crt: no such file or directory
E0812 23:58:50.764228  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/client.crt: no such file or directory
E0812 23:58:50.784523  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/client.crt: no such file or directory
E0812 23:58:50.824804  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/client.crt: no such file or directory
E0812 23:58:50.904987  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/client.crt: no such file or directory
E0812 23:58:51.065516  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/client.crt: no such file or directory
E0812 23:58:51.386101  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/client.crt: no such file or directory
E0812 23:58:52.027053  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/client.crt: no such file or directory
E0812 23:58:53.307275  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/client.crt: no such file or directory
E0812 23:58:55.868178  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/client.crt: no such file or directory
E0812 23:59:00.988740  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/client.crt: no such file or directory
E0812 23:59:11.229881  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/client.crt: no such file or directory
error_spam_test.go:78: (dbg) Done: out/minikube-linux-amd64 start -p nospam-20210812235827-820289 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20210812235827-820289 --driver=kvm2  --container-runtime=crio: (53.401894262s)
--- PASS: TestErrorSpam/setup (53.40s)

TestErrorSpam/start (0.42s)

=== RUN   TestErrorSpam/start
error_spam_test.go:213: Cleaning up 1 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210812235827-820289 --log_dir /tmp/nospam-20210812235827-820289 start --dry-run
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210812235827-820289 --log_dir /tmp/nospam-20210812235827-820289 start --dry-run
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210812235827-820289 --log_dir /tmp/nospam-20210812235827-820289 start --dry-run
--- PASS: TestErrorSpam/start (0.42s)

TestErrorSpam/status (0.73s)

=== RUN   TestErrorSpam/status
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210812235827-820289 --log_dir /tmp/nospam-20210812235827-820289 status
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210812235827-820289 --log_dir /tmp/nospam-20210812235827-820289 status
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210812235827-820289 --log_dir /tmp/nospam-20210812235827-820289 status
--- PASS: TestErrorSpam/status (0.73s)

TestErrorSpam/pause (3.43s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210812235827-820289 --log_dir /tmp/nospam-20210812235827-820289 pause
error_spam_test.go:156: (dbg) Done: out/minikube-linux-amd64 -p nospam-20210812235827-820289 --log_dir /tmp/nospam-20210812235827-820289 pause: (2.449301287s)
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210812235827-820289 --log_dir /tmp/nospam-20210812235827-820289 pause
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210812235827-820289 --log_dir /tmp/nospam-20210812235827-820289 pause
--- PASS: TestErrorSpam/pause (3.43s)

TestErrorSpam/unpause (1.82s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210812235827-820289 --log_dir /tmp/nospam-20210812235827-820289 unpause
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210812235827-820289 --log_dir /tmp/nospam-20210812235827-820289 unpause
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210812235827-820289 --log_dir /tmp/nospam-20210812235827-820289 unpause
--- PASS: TestErrorSpam/unpause (1.82s)

TestErrorSpam/stop (5.26s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210812235827-820289 --log_dir /tmp/nospam-20210812235827-820289 stop
E0812 23:59:31.710443  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/client.crt: no such file or directory
error_spam_test.go:156: (dbg) Done: out/minikube-linux-amd64 -p nospam-20210812235827-820289 --log_dir /tmp/nospam-20210812235827-820289 stop: (5.117654411s)
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210812235827-820289 --log_dir /tmp/nospam-20210812235827-820289 stop
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210812235827-820289 --log_dir /tmp/nospam-20210812235827-820289 stop
--- PASS: TestErrorSpam/stop (5.26s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1606: local sync path: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/test/nested/copy/820289/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (77.65s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:1982: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210812235933-820289 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0813 00:00:12.671985  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/client.crt: no such file or directory
functional_test.go:1982: (dbg) Done: out/minikube-linux-amd64 start -p functional-20210812235933-820289 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m17.648985292s)
--- PASS: TestFunctional/serial/StartWithProxy (77.65s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.02s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:627: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210812235933-820289 --alsologtostderr -v=8
functional_test.go:627: (dbg) Done: out/minikube-linux-amd64 start -p functional-20210812235933-820289 --alsologtostderr -v=8: (6.015636465s)
functional_test.go:631: soft start took 6.016248618s for "functional-20210812235933-820289" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.02s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:647: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.23s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:660: (dbg) Run:  kubectl --context functional-20210812235933-820289 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.23s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.37s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:982: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 cache add k8s.gcr.io/pause:3.1
functional_test.go:982: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 cache add k8s.gcr.io/pause:3.3
functional_test.go:982: (dbg) Done: out/minikube-linux-amd64 -p functional-20210812235933-820289 cache add k8s.gcr.io/pause:3.3: (1.232050808s)
functional_test.go:982: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 cache add k8s.gcr.io/pause:latest
functional_test.go:982: (dbg) Done: out/minikube-linux-amd64 -p functional-20210812235933-820289 cache add k8s.gcr.io/pause:latest: (1.259312986s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.37s)

TestFunctional/serial/CacheCmd/cache/add_local (1.61s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1012: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20210812235933-820289 /tmp/functional-20210812235933-820289835113346
functional_test.go:1024: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 cache add minikube-local-cache-test:functional-20210812235933-820289
functional_test.go:1024: (dbg) Done: out/minikube-linux-amd64 -p functional-20210812235933-820289 cache add minikube-local-cache-test:functional-20210812235933-820289: (1.316806522s)
functional_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 cache delete minikube-local-cache-test:functional-20210812235933-820289
functional_test.go:1018: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20210812235933-820289
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.61s)
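Stripped of test scaffolding, add_local is the round trip for caching an image that exists only in the local Docker daemon (the demo tag is illustrative):

  docker build -t minikube-local-cache-test:demo .
  minikube cache add minikube-local-cache-test:demo
  minikube cache delete minikube-local-cache-test:demo
  docker rmi minikube-local-cache-test:demo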

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1036: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1043: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1056: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.97s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1078: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 ssh sudo crictl rmi k8s.gcr.io/pause:latest
functional_test.go:1084: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1084: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210812235933-820289 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (222.142103ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 cache reload
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-20210812235933-820289 cache reload: (1.274501105s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.97s)
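cache_reload demonstrates that "minikube cache reload" pushes cached images back into a node after they have been removed. The same loop by hand, with the exit codes this run observed:

  minikube ssh sudo crictl rmi k8s.gcr.io/pause:latest
  minikube ssh sudo crictl inspecti k8s.gcr.io/pause:latest   # exit 1: image no longer present
  minikube cache reload
  minikube ssh sudo crictl inspecti k8s.gcr.io/pause:latest   # succeeds: image restored from the cache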

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1103: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1103: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:678: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 kubectl -- --context functional-20210812235933-820289 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:701: (dbg) Run:  out/kubectl --context functional-20210812235933-820289 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (33.28s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:715: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210812235933-820289 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0813 00:01:34.592609  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/client.crt: no such file or directory
functional_test.go:715: (dbg) Done: out/minikube-linux-amd64 start -p functional-20210812235933-820289 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.279617964s)
functional_test.go:719: restart took 33.279731762s for "functional-20210812235933-820289" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (33.28s)
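--extra-config takes component.flag=value pairs and, as here, can be applied by restarting an existing profile. Reduced to its essentials, the invocation under test is:

  minikube start -p functional --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all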

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:766: (dbg) Run:  kubectl --context functional-20210812235933-820289 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:780: etcd phase: Running
functional_test.go:790: etcd status: Ready
functional_test.go:780: kube-apiserver phase: Running
functional_test.go:790: kube-apiserver status: Ready
functional_test.go:780: kube-controller-manager phase: Running
functional_test.go:790: kube-controller-manager status: Ready
functional_test.go:780: kube-scheduler phase: Running
functional_test.go:790: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.4s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1165: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 logs
functional_test.go:1165: (dbg) Done: out/minikube-linux-amd64 -p functional-20210812235933-820289 logs: (1.400694514s)
--- PASS: TestFunctional/serial/LogsCmd (1.40s)

TestFunctional/serial/LogsFileCmd (1.4s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1181: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 logs --file /tmp/functional-20210812235933-820289264838393/logs.txt
functional_test.go:1181: (dbg) Done: out/minikube-linux-amd64 -p functional-20210812235933-820289 logs --file /tmp/functional-20210812235933-820289264838393/logs.txt: (1.403080283s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.40s)

TestFunctional/parallel/ConfigCmd (0.38s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210812235933-820289 config get cpus: exit status 14 (71.939621ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 config set cpus 2
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 config get cpus
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 config unset cpus
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 config get cpus
functional_test.go:1129: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210812235933-820289 config get cpus: exit status 14 (53.212745ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.38s)
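The two exit-status-14 results above are the expected behaviour when a key is absent from the config. The full cycle, runnable against any profile:

  minikube config get cpus      # exit status 14 while the key is unset
  minikube config set cpus 2
  minikube config get cpus      # prints 2
  minikube config unset cpus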

TestFunctional/parallel/DashboardCmd (6.09s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:857: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20210812235933-820289 --alsologtostderr -v=1]

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:862: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20210812235933-820289 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to kill pid 825365: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.09s)

TestFunctional/parallel/DryRun (0.33s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:919: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210812235933-820289 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:919: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20210812235933-820289 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (162.252893ms)

-- stdout --
	* [functional-20210812235933-820289] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	  - MINIKUBE_LOCATION=12230
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0813 00:01:51.931382  824624 out.go:298] Setting OutFile to fd 1 ...
	I0813 00:01:51.931453  824624 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 00:01:51.931457  824624 out.go:311] Setting ErrFile to fd 2...
	I0813 00:01:51.931460  824624 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 00:01:51.931554  824624 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	I0813 00:01:51.931802  824624 out.go:305] Setting JSON to false
	I0813 00:01:51.967517  824624 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":13475,"bootTime":1628799437,"procs":187,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 00:01:51.967597  824624 start.go:121] virtualization: kvm guest
	I0813 00:01:51.970384  824624 out.go:177] * [functional-20210812235933-820289] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 00:01:51.972124  824624 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0813 00:01:51.973801  824624 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 00:01:51.975372  824624 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	I0813 00:01:51.976882  824624 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 00:01:51.977659  824624 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 00:01:51.977723  824624 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 00:01:51.988440  824624 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:39803
	I0813 00:01:51.988938  824624 main.go:130] libmachine: () Calling .GetVersion
	I0813 00:01:51.989426  824624 main.go:130] libmachine: Using API Version  1
	I0813 00:01:51.989450  824624 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 00:01:51.989863  824624 main.go:130] libmachine: () Calling .GetMachineName
	I0813 00:01:51.990082  824624 main.go:130] libmachine: (functional-20210812235933-820289) Calling .DriverName
	I0813 00:01:51.990267  824624 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 00:01:51.990706  824624 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 00:01:51.990750  824624 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 00:01:52.001112  824624 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:39309
	I0813 00:01:52.001547  824624 main.go:130] libmachine: () Calling .GetVersion
	I0813 00:01:52.002035  824624 main.go:130] libmachine: Using API Version  1
	I0813 00:01:52.002062  824624 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 00:01:52.002462  824624 main.go:130] libmachine: () Calling .GetMachineName
	I0813 00:01:52.002664  824624 main.go:130] libmachine: (functional-20210812235933-820289) Calling .DriverName
	I0813 00:01:52.031906  824624 out.go:177] * Using the kvm2 driver based on existing profile
	I0813 00:01:52.031931  824624 start.go:278] selected driver: kvm2
	I0813 00:01:52.031936  824624 start.go:751] validating driver "kvm2" against &{Name:functional-20210812235933-820289 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12122/minikube-v1.22.0-1628238775-12122.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:functional-20210812235933-820289 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.20 Port:8441 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 00:01:52.032056  824624 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0813 00:01:52.034746  824624 out.go:177] 
	W0813 00:01:52.034855  824624 out.go:242] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0813 00:01:52.036301  824624 out.go:177] 

** /stderr **
functional_test.go:934: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210812235933-820289 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.33s)
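--dry-run exercises the full validation path without creating or mutating the VM, which is why the undersized request fails fast. A one-line reproduction of the failure mode shown above:

  minikube start -p functional --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio; echo $?   # 23 (RSRC_INSUFFICIENT_REQ_MEMORY)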

TestFunctional/parallel/InternationalLanguage (0.16s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:956: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210812235933-820289 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:956: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20210812235933-820289 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (164.591081ms)

-- stdout --
	* [functional-20210812235933-820289] minikube v1.22.0 sur Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	  - MINIKUBE_LOCATION=12230
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0813 00:01:52.258611  824684 out.go:298] Setting OutFile to fd 1 ...
	I0813 00:01:52.258682  824684 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 00:01:52.258686  824684 out.go:311] Setting ErrFile to fd 2...
	I0813 00:01:52.258689  824684 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 00:01:52.258830  824684 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	I0813 00:01:52.259051  824684 out.go:305] Setting JSON to false
	I0813 00:01:52.295098  824684 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":13475,"bootTime":1628799437,"procs":188,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 00:01:52.295217  824684 start.go:121] virtualization: kvm guest
	I0813 00:01:52.297519  824684 out.go:177] * [functional-20210812235933-820289] minikube v1.22.0 sur Debian 9.13 (kvm/amd64)
	I0813 00:01:52.299208  824684 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0813 00:01:52.300801  824684 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 00:01:52.302345  824684 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	I0813 00:01:52.303813  824684 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 00:01:52.304649  824684 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 00:01:52.304736  824684 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 00:01:52.316430  824684 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35047
	I0813 00:01:52.316902  824684 main.go:130] libmachine: () Calling .GetVersion
	I0813 00:01:52.317488  824684 main.go:130] libmachine: Using API Version  1
	I0813 00:01:52.317513  824684 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 00:01:52.317934  824684 main.go:130] libmachine: () Calling .GetMachineName
	I0813 00:01:52.318122  824684 main.go:130] libmachine: (functional-20210812235933-820289) Calling .DriverName
	I0813 00:01:52.318307  824684 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 00:01:52.318602  824684 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 00:01:52.318636  824684 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 00:01:52.329568  824684 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:46835
	I0813 00:01:52.329998  824684 main.go:130] libmachine: () Calling .GetVersion
	I0813 00:01:52.330580  824684 main.go:130] libmachine: Using API Version  1
	I0813 00:01:52.330602  824684 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 00:01:52.331009  824684 main.go:130] libmachine: () Calling .GetMachineName
	I0813 00:01:52.331221  824684 main.go:130] libmachine: (functional-20210812235933-820289) Calling .DriverName
	I0813 00:01:52.361007  824684 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0813 00:01:52.361035  824684 start.go:278] selected driver: kvm2
	I0813 00:01:52.361042  824684 start.go:751] validating driver "kvm2" against &{Name:functional-20210812235933-820289 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12122/minikube-v1.22.0-1628238775-12122.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:functional-20210812235933-820289 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.20 Port:8441 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 00:01:52.361205  824684 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0813 00:01:52.363909  824684 out.go:177] 
	W0813 00:01:52.364028  824684 out.go:242] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0813 00:01:52.365603  824684 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

TestFunctional/parallel/StatusCmd (1.18s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:809: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 status

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:815: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:826: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.18s)
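status accepts a Go template over the Host/Kubelet/APIServer/Kubeconfig fields via -f, plus -o json for machine consumption; for example:

  minikube status -f host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
  minikube status -o json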

TestFunctional/parallel/ServiceCmd (16.82s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1357: (dbg) Run:  kubectl --context functional-20210812235933-820289 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1363: (dbg) Run:  kubectl --context functional-20210812235933-820289 expose deployment hello-node --type=NodePort --port=8080

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1368: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:343: "hello-node-6cbfcd7cbc-hdx79" [91839222-6faa-4aa4-ba58-3dcf58bdb7dd] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:343: "hello-node-6cbfcd7cbc-hdx79" [91839222-6faa-4aa4-ba58-3dcf58bdb7dd] Running

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1368: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 15.044490315s
functional_test.go:1372: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 service list

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1385: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 service --namespace=default --https --url hello-node

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1394: found endpoint: https://192.168.39.20:31012
functional_test.go:1405: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 service hello-node --url --format={{.IP}}

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1414: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 service hello-node --url
functional_test.go:1420: found endpoint for hello-node: http://192.168.39.20:31012
functional_test.go:1431: Attempting to fetch http://192.168.39.20:31012 ...
functional_test.go:1450: http://192.168.39.20:31012: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-6cbfcd7cbc-hdx79

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.20:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.20:31012
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmd (16.82s)
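The "Attempting to fetch" step above polls the NodePort endpoint until the echoserver answers. A minimal standalone sketch of the same check (URL taken from the log; substitute your own endpoint):

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	url := "http://192.168.39.20:31012"
	var lastErr error
	for i := 0; i < 10; i++ {
		resp, err := http.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("%s: success! body:\n%s\n", url, body)
			return
		}
		lastErr = err
		time.Sleep(2 * time.Second) // the NodePort may lag behind pod readiness
	}
	log.Fatalf("never reached %s: %v", url, lastErr)
}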

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1465: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 addons list
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (47.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:343: "storage-provisioner" [2d6ea71e-7ccc-4240-bab5-9f0b35add343] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.013434231s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-20210812235933-820289 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-20210812235933-820289 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20210812235933-820289 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20210812235933-820289 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20210812235933-820289 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [45ac31c2-68ba-4db5-a949-11a0322df66b] Pending
2021/08/13 00:02:03 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:343: "sp-pod" [45ac31c2-68ba-4db5-a949-11a0322df66b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:343: "sp-pod" [45ac31c2-68ba-4db5-a949-11a0322df66b] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 29.015643203s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-20210812235933-820289 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-20210812235933-820289 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-20210812235933-820289 delete -f testdata/storage-provisioner/pod.yaml: (1.899423323s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20210812235933-820289 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [1273b179-e857-463e-a1a0-5ff75cc0486a] Pending
helpers_test.go:343: "sp-pod" [1273b179-e857-463e-a1a0-5ff75cc0486a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:343: "sp-pod" [1273b179-e857-463e-a1a0-5ff75cc0486a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.027791714s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-20210812235933-820289 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (47.06s)
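The sequence above is a persistence check: write through one pod, delete it, re-create a pod on the same claim, and read the file back. A minimal sketch of that flow via kubectl (pod name and manifests as in the log; a real check also waits for pod readiness between apply and exec, as the log shows):

package main

import (
	"log"
	"os/exec"
)

func run(args ...string) {
	cmd := exec.Command("kubectl",
		append([]string{"--context", "functional-20210812235933-820289"}, args...)...)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("%v: %s", err, out)
	}
}

func main() {
	run("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")       // write via the first pod
	run("delete", "-f", "testdata/storage-provisioner/pod.yaml") // delete the pod, keep the claim
	run("apply", "-f", "testdata/storage-provisioner/pod.yaml")  // new pod, same PVC
	run("exec", "sp-pod", "--", "ls", "/tmp/mount")              // the file must still be there
}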

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1498: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 ssh "echo hello"
functional_test.go:1515: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:549: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 ssh "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (34.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1546: (dbg) Run:  kubectl --context functional-20210812235933-820289 replace --force -f testdata/mysql.yaml

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1551: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:343: "mysql-9bbbc5bbb-hgcnk" [fc7081d0-1845-47a6-9806-c09f26d59002] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:343: "mysql-9bbbc5bbb-hgcnk" [fc7081d0-1845-47a6-9806-c09f26d59002] Running

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1551: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 28.66355456s
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20210812235933-820289 exec mysql-9bbbc5bbb-hgcnk -- mysql -ppassword -e "show databases;"
functional_test.go:1558: (dbg) Non-zero exit: kubectl --context functional-20210812235933-820289 exec mysql-9bbbc5bbb-hgcnk -- mysql -ppassword -e "show databases;": exit status 1 (406.223473ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20210812235933-820289 exec mysql-9bbbc5bbb-hgcnk -- mysql -ppassword -e "show databases;"
functional_test.go:1558: (dbg) Non-zero exit: kubectl --context functional-20210812235933-820289 exec mysql-9bbbc5bbb-hgcnk -- mysql -ppassword -e "show databases;": exit status 1 (628.498046ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20210812235933-820289 exec mysql-9bbbc5bbb-hgcnk -- mysql -ppassword -e "show databases;"
functional_test.go:1558: (dbg) Non-zero exit: kubectl --context functional-20210812235933-820289 exec mysql-9bbbc5bbb-hgcnk -- mysql -ppassword -e "show databases;": exit status 1 (196.401527ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20210812235933-820289 exec mysql-9bbbc5bbb-hgcnk -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (34.54s)
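The ERROR 1045/2002 exits above are expected while mysqld is still initializing inside the pod; the test simply retries the same query until it succeeds. A minimal sketch of that retry loop (pod name from the log):

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	args := []string{"--context", "functional-20210812235933-820289", "exec",
		"mysql-9bbbc5bbb-hgcnk", "--", "mysql", "-ppassword", "-e", "show databases;"}
	for i := 0; i < 10; i++ {
		if out, err := exec.Command("kubectl", args...).CombinedOutput(); err == nil {
			log.Printf("databases:\n%s", out)
			return
		}
		time.Sleep(3 * time.Second) // the server may still be initializing
	}
	log.Fatal("mysql never became reachable")
}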

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1678: Checking for existence of /etc/test/nested/copy/820289/hosts within VM
functional_test.go:1679: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 ssh "sudo cat /etc/test/nested/copy/820289/hosts"

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1684: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1719: Checking for existence of /etc/ssl/certs/820289.pem within VM
functional_test.go:1720: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 ssh "sudo cat /etc/ssl/certs/820289.pem"
functional_test.go:1719: Checking for existence of /usr/share/ca-certificates/820289.pem within VM
functional_test.go:1720: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 ssh "sudo cat /usr/share/ca-certificates/820289.pem"
functional_test.go:1719: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1720: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1746: Checking for existence of /etc/ssl/certs/8202892.pem within VM
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 ssh "sudo cat /etc/ssl/certs/8202892.pem"
functional_test.go:1746: Checking for existence of /usr/share/ca-certificates/8202892.pem within VM
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 ssh "sudo cat /usr/share/ca-certificates/8202892.pem"
functional_test.go:1746: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.27s)
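The .0 filenames above (51391683.0, 3ec20f2e.0) are OpenSSL subject-hash names for the synced certificates. A minimal sketch of deriving one, assuming a local PEM copy of the cert (the path below is illustrative):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// `openssl x509 -hash` prints the subject-hash that the /etc/ssl/certs
	// symlink names are built from.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "/home/jenkins/.minikube/ca.crt").Output() // illustrative path
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("/etc/ssl/certs/%s.0\n", strings.TrimSpace(string(out)))
}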

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:216: (dbg) Run:  kubectl --context functional-20210812235933-820289 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/LoadImage (2.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/LoadImage
=== PAUSE TestFunctional/parallel/LoadImage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:239: (dbg) Run:  docker pull busybox:1.33

                                                
                                                
=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:246: (dbg) Run:  docker tag busybox:1.33 docker.io/library/busybox:load-functional-20210812235933-820289
functional_test.go:252: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 image load docker.io/library/busybox:load-functional-20210812235933-820289

                                                
                                                
=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:252: (dbg) Done: out/minikube-linux-amd64 -p functional-20210812235933-820289 image load docker.io/library/busybox:load-functional-20210812235933-820289: (1.722764249s)
functional_test.go:373: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210812235933-820289 -- sudo crictl inspecti docker.io/library/busybox:load-functional-20210812235933-820289
--- PASS: TestFunctional/parallel/LoadImage (2.82s)

                                                
                                    
x
+
TestFunctional/parallel/RemoveImage (2.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/RemoveImage
=== PAUSE TestFunctional/parallel/RemoveImage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:331: (dbg) Run:  docker pull busybox:1.32
functional_test.go:338: (dbg) Run:  docker tag busybox:1.32 docker.io/library/busybox:remove-functional-20210812235933-820289
functional_test.go:344: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 image load docker.io/library/busybox:remove-functional-20210812235933-820289

                                                
                                                
=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:344: (dbg) Done: out/minikube-linux-amd64 -p functional-20210812235933-820289 image load docker.io/library/busybox:remove-functional-20210812235933-820289: (1.452713971s)
functional_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 image rm docker.io/library/busybox:remove-functional-20210812235933-820289

                                                
                                                
=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:387: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210812235933-820289 -- sudo crictl images
--- PASS: TestFunctional/parallel/RemoveImage (2.97s)

                                                
                                    
x
+
TestFunctional/parallel/LoadImageFromFile (5.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/LoadImageFromFile
=== PAUSE TestFunctional/parallel/LoadImageFromFile

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/LoadImageFromFile
functional_test.go:279: (dbg) Run:  docker pull busybox:1.31
functional_test.go:286: (dbg) Run:  docker tag busybox:1.31 docker.io/library/busybox:load-from-file-functional-20210812235933-820289
functional_test.go:293: (dbg) Run:  docker save -o busybox.tar docker.io/library/busybox:load-from-file-functional-20210812235933-820289
functional_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 image load /home/jenkins/workspace/KVM_Linux_crio_integration/busybox.tar
functional_test.go:304: (dbg) Done: out/minikube-linux-amd64 -p functional-20210812235933-820289 image load /home/jenkins/workspace/KVM_Linux_crio_integration/busybox.tar: (4.319492397s)
functional_test.go:387: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210812235933-820289 -- sudo crictl images
--- PASS: TestFunctional/parallel/LoadImageFromFile (5.55s)
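A minimal sketch of the tarball round-trip above: pull and tag with docker, save to a tar, load it into the cluster's CRI-O via minikube, then confirm crictl sees it (tag and profile names from the log):

package main

import (
	"log"
	"os/exec"
)

func run(name string, args ...string) {
	if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
}

func main() {
	tag := "docker.io/library/busybox:load-from-file-functional-20210812235933-820289"
	run("docker", "pull", "busybox:1.31")
	run("docker", "tag", "busybox:1.31", tag)
	run("docker", "save", "-o", "busybox.tar", tag)
	run("minikube", "-p", "functional-20210812235933-820289", "image", "load", "busybox.tar")
	run("minikube", "ssh", "-p", "functional-20210812235933-820289", "--",
		"sudo", "crictl", "inspecti", tag)
}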

                                                
                                    
x
+
TestFunctional/parallel/BuildImage (8.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/BuildImage
=== PAUSE TestFunctional/parallel/BuildImage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/BuildImage
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 image build -t localhost/my-image:functional-20210812235933-820289 testdata/build

                                                
                                                
=== CONT  TestFunctional/parallel/BuildImage
functional_test.go:407: (dbg) Done: out/minikube-linux-amd64 -p functional-20210812235933-820289 image build -t localhost/my-image:functional-20210812235933-820289 testdata/build: (8.100666946s)
functional_test.go:412: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20210812235933-820289 image build -t localhost/my-image:functional-20210812235933-820289 testdata/build:
STEP 1: FROM busybox
STEP 2: RUN true
--> 9a32dcf8f4a
STEP 3: ADD content.txt /
STEP 4: COMMIT localhost/my-image:functional-20210812235933-820289
--> b7869318935
b7869318935fa15c41f92bce6281b286c5f423b84f71e2a61f847e481e6471bb
functional_test.go:415: (dbg) Stderr: out/minikube-linux-amd64 -p functional-20210812235933-820289 image build -t localhost/my-image:functional-20210812235933-820289 testdata/build:
Completed short name "busybox" with unqualified-search registries (origin: /etc/containers/registries.conf)
Getting image source signatures
Copying blob sha256:b71f96345d44b237decc0c2d6c2f9ad0d17fde83dad7579608f1f0764d9686f2
Copying config sha256:69593048aa3acfee0f75f20b77acb549de2472063053f6730c4091b53f2dfb02
Writing manifest to image destination
Storing signatures
functional_test.go:373: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210812235933-820289 -- sudo crictl inspecti localhost/my-image:functional-20210812235933-820289
--- PASS: TestFunctional/parallel/BuildImage (8.35s)

                                                
                                    
x
+
TestFunctional/parallel/ListImages (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ListImages
=== PAUSE TestFunctional/parallel/ListImages

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ListImages
functional_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 image ls

                                                
                                                
=== CONT  TestFunctional/parallel/ListImages
functional_test.go:446: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20210812235933-820289 image ls:
localhost/minikube-local-cache-test:functional-20210812235933-820289
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.4.1
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.21.3
k8s.gcr.io/kube-proxy:v1.21.3
k8s.gcr.io/kube-controller-manager:v1.21.3
k8s.gcr.io/kube-apiserver:v1.21.3
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns/coredns:v1.8.0
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/kubernetesui/metrics-scraper:v1.0.4
docker.io/kubernetesui/dashboard:v2.1.0
docker.io/kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestFunctional/parallel/ListImages (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1774: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 ssh "sudo systemctl is-active docker"

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1774: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210812235933-820289 ssh "sudo systemctl is-active docker": exit status 1 (213.016526ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:1774: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 ssh "sudo systemctl is-active containerd"

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1774: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210812235933-820289 ssh "sudo systemctl is-active containerd": exit status 1 (204.358121ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)
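The non-zero exits above are the expected outcome: systemctl is-active exits 3 for an inactive unit while printing "inactive", and minikube ssh passes the code through. A minimal sketch that keys off the stdout text rather than the exit code:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "functional-20210812235933-820289",
		"ssh", "sudo systemctl is-active docker").Output()
	state := strings.TrimSpace(string(out))
	// "inactive" arrives with a non-zero exit, so the exit code alone is not
	// an error signal here.
	if state != "inactive" {
		log.Fatalf("docker should be disabled under crio, got %q (err=%v)", state, err)
	}
	fmt.Printf("docker is %s, as expected\n", state)
}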

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2003: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2016: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.63s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1202: (dbg) Run:  out/minikube-linux-amd64 profile lis

                                                
                                                
=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1206: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (13.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:76: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20210812235933-820289 /tmp/mounttest826219012:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:110: wrote "test-1628812901532042643" to /tmp/mounttest826219012/created-by-test
functional_test_mount_test.go:110: wrote "test-1628812901532042643" to /tmp/mounttest826219012/created-by-test-removed-by-pod
functional_test_mount_test.go:110: wrote "test-1628812901532042643" to /tmp/mounttest826219012/test-1628812901532042643
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 ssh "findmnt -T /mount-9p | grep 9p"

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:118: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210812235933-820289 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (320.138891ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 ssh "findmnt -T /mount-9p | grep 9p"

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 ssh -- ls -la /mount-9p

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:136: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 13 00:01 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 13 00:01 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 13 00:01 test-1628812901532042643
functional_test_mount_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 ssh cat /mount-9p/test-1628812901532042643

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:151: (dbg) Run:  kubectl --context functional-20210812235933-820289 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:343: "busybox-mount" [f82c9dbe-50cb-44e5-b222-4277c8ad8c4b] Pending

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:343: "busybox-mount" [f82c9dbe-50cb-44e5-b222-4277c8ad8c4b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:343: "busybox-mount" [f82c9dbe-50cb-44e5-b222-4277c8ad8c4b] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 11.007932703s
functional_test_mount_test.go:172: (dbg) Run:  kubectl --context functional-20210812235933-820289 logs busybox-mount
functional_test_mount_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:93: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:97: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20210812235933-820289 /tmp/mounttest826219012:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (13.92s)
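The first findmnt above fails because the daemonized `minikube mount` has not finished connecting when the check first runs; the test simply retries. A minimal sketch of the same wait-for-mount check:

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	for i := 0; i < 5; i++ {
		err := exec.Command("minikube", "-p", "functional-20210812235933-820289",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			log.Print("9p mount is visible in the guest")
			return
		}
		time.Sleep(time.Second) // the mount daemon may still be connecting
	}
	log.Fatal("/mount-9p never appeared")
}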

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1240: (dbg) Run:  out/minikube-linux-amd64 profile list

                                                
                                                
=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1245: Took "249.864916ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1254: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1259: Took "58.383056ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list -o json

                                                
                                                
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1295: Took "252.300135ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1303: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1308: Took "60.498116ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:225: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20210812235933-820289 /tmp/mounttest636273043:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210812235933-820289 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (219.072181ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:269: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 ssh -- ls -la /mount-9p
functional_test_mount_test.go:273: guest mount directory contents
total 0
functional_test_mount_test.go:275: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20210812235933-820289 /tmp/mounttest636273043:/mount-9p --alsologtostderr -v=1 --port 46464] ...

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:276: reading mount text
functional_test_mount_test.go:290: done reading mount text
functional_test_mount_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 ssh "sudo umount -f /mount-9p"

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:242: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210812235933-820289 ssh "sudo umount -f /mount-9p": exit status 1 (227.154638ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:244: "out/minikube-linux-amd64 -p functional-20210812235933-820289 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:246: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20210812235933-820289 /tmp/mounttest636273043:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.74s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:126: (dbg) daemon: [out/minikube-linux-amd64 -p functional-20210812235933-820289 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:1865: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:1865: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:1865: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210812235933-820289 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:164: (dbg) Run:  kubectl --context functional-20210812235933-820289 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:229: tunnel at http://10.103.106.205 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:364: (dbg) stopping [out/minikube-linux-amd64 -p functional-20210812235933-820289 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/delete_busybox_image (0.08s)

                                                
                                                
=== RUN   TestFunctional/delete_busybox_image
functional_test.go:183: (dbg) Run:  docker rmi -f docker.io/library/busybox:load-functional-20210812235933-820289
functional_test.go:188: (dbg) Run:  docker rmi -f docker.io/library/busybox:remove-functional-20210812235933-820289
--- PASS: TestFunctional/delete_busybox_image (0.08s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:195: (dbg) Run:  docker rmi -f localhost/my-image:functional-20210812235933-820289
--- PASS: TestFunctional/delete_my-image_image (0.04s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:203: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20210812235933-820289
--- PASS: TestFunctional/delete_minikube_cached_images (0.03s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.32s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:146: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-20210813000359-820289 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:146: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-20210813000359-820289 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (91.637172ms)

                                                
                                                
-- stdout --
	{"data":{"currentstep":"0","message":"[json-output-error-20210813000359-820289] minikube v1.22.0 on Debian 9.13 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"},"datacontenttype":"application/json","id":"5b7b4474-c65b-45f5-8b89-89df4aecd942","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig"},"datacontenttype":"application/json","id":"8fba644d-ea10-4380-b9d5-b287bc993d21","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"},"datacontenttype":"application/json","id":"633c6dc1-1086-4273-8645-27795fcc6f18","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube"},"datacontenttype":"application/json","id":"48a402bd-2881-4f66-9692-cb8e7fae1e50","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_LOCATION=12230"},"datacontenttype":"application/json","id":"5b7175c2-991c-4b82-9257-14e95f9ca89b","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""},"datacontenttype":"application/json","id":"bb0719a5-7907-4be5-bc09-a12eda88e26a","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.error"}

-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-20210813000359-820289" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-20210813000359-820289
--- PASS: TestErrorJSONOutput (0.32s)
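
Note: the stdout block above is the event stream minikube emits with --output=json, one CloudEvents-style JSON object per line, with step, info, and error event types. Below is a minimal Go sketch of how such a stream could be consumed; the struct models only the fields visible in this report, not minikube's own types.

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event models just the fields shown in the log above.
	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev event
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // ignore any non-JSON lines
			}
			switch ev.Type {
			case "io.k8s.sigs.minikube.error":
				fmt.Printf("ERROR %s: %s\n", ev.Data["name"], ev.Data["message"])
			case "io.k8s.sigs.minikube.step":
				fmt.Printf("[%s/%s] %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
			default:
				fmt.Println(ev.Data["message"])
			}
		}
	}

Piping "minikube start --output=json" into a consumer like this would surface the typed DRV_UNSUPPORTED_OS error above as a distinct event rather than free-form text.
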
TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMultiNode/serial/FreshStart2Nodes (119.18s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210813000359-820289 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0813 00:04:18.433091  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/client.crt: no such file or directory
multinode_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210813000359-820289 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m58.774962785s)
multinode_test.go:87: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813000359-820289 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (119.18s)

TestMultiNode/serial/AddNode (52.91s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:106: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20210813000359-820289 -v 3 --alsologtostderr
E0813 00:06:46.654913  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812235933-820289/client.crt: no such file or directory
E0813 00:06:51.775436  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812235933-820289/client.crt: no such file or directory
E0813 00:07:02.016446  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812235933-820289/client.crt: no such file or directory
E0813 00:07:22.497213  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812235933-820289/client.crt: no such file or directory
multinode_test.go:106: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-20210813000359-820289 -v 3 --alsologtostderr: (52.347759438s)
multinode_test.go:112: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813000359-820289 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (52.91s)

TestMultiNode/serial/ProfileList (0.23s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:128: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.23s)

TestMultiNode/serial/CopyFile (1.78s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:169: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813000359-820289 status --output json --alsologtostderr
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813000359-820289 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:549: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813000359-820289 ssh "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813000359-820289 cp testdata/cp-test.txt multinode-20210813000359-820289-m02:/home/docker/cp-test.txt
helpers_test.go:549: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813000359-820289 ssh -n multinode-20210813000359-820289-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813000359-820289 cp testdata/cp-test.txt multinode-20210813000359-820289-m03:/home/docker/cp-test.txt
helpers_test.go:549: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813000359-820289 ssh -n multinode-20210813000359-820289-m03 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestMultiNode/serial/CopyFile (1.78s)
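
Note: the CopyFile steps above are a copy-then-read-back round trip against each node. A sketch of the same loop in Go, assuming the profile and node names from this log (the primary node's unqualified cp/ssh form is normalized to the node-qualified one here):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		const bin = "out/minikube-linux-amd64"
		const profile = "multinode-20210813000359-820289"
		nodes := []string{profile, profile + "-m02", profile + "-m03"}
		for _, node := range nodes {
			// Copy the fixture onto the node...
			cp := exec.Command(bin, "-p", profile, "cp", "testdata/cp-test.txt",
				node+":/home/docker/cp-test.txt")
			if err := cp.Run(); err != nil {
				panic(err)
			}
			// ...then read it back over ssh to confirm it landed.
			out, err := exec.Command(bin, "-p", profile, "ssh", "-n", node,
				"sudo cat /home/docker/cp-test.txt").Output()
			if err != nil {
				panic(err)
			}
			fmt.Printf("%s: verified %d bytes\n", node, len(out))
		}
	}
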
TestMultiNode/serial/StopNode (2.93s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:191: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813000359-820289 node stop m03
multinode_test.go:191: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210813000359-820289 node stop m03: (2.088088902s)
multinode_test.go:197: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813000359-820289 status
multinode_test.go:197: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210813000359-820289 status: exit status 7 (418.621726ms)

-- stdout --
	multinode-20210813000359-820289
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20210813000359-820289-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20210813000359-820289-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:204: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813000359-820289 status --alsologtostderr
multinode_test.go:204: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210813000359-820289 status --alsologtostderr: exit status 7 (420.355367ms)

-- stdout --
	multinode-20210813000359-820289
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20210813000359-820289-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20210813000359-820289-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0813 00:07:42.920899  828238 out.go:298] Setting OutFile to fd 1 ...
	I0813 00:07:42.920973  828238 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 00:07:42.920978  828238 out.go:311] Setting ErrFile to fd 2...
	I0813 00:07:42.920981  828238 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 00:07:42.921072  828238 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	I0813 00:07:42.921228  828238 out.go:305] Setting JSON to false
	I0813 00:07:42.921249  828238 mustload.go:65] Loading cluster: multinode-20210813000359-820289
	I0813 00:07:42.921510  828238 status.go:253] checking status of multinode-20210813000359-820289 ...
	I0813 00:07:42.921854  828238 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 00:07:42.921899  828238 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 00:07:42.933936  828238 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38239
	I0813 00:07:42.934401  828238 main.go:130] libmachine: () Calling .GetVersion
	I0813 00:07:42.935008  828238 main.go:130] libmachine: Using API Version  1
	I0813 00:07:42.935030  828238 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 00:07:42.935362  828238 main.go:130] libmachine: () Calling .GetMachineName
	I0813 00:07:42.935521  828238 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetState
	I0813 00:07:42.938290  828238 status.go:328] multinode-20210813000359-820289 host status = "Running" (err=<nil>)
	I0813 00:07:42.938310  828238 host.go:66] Checking if "multinode-20210813000359-820289" exists ...
	I0813 00:07:42.938688  828238 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 00:07:42.938741  828238 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 00:07:42.948989  828238 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43009
	I0813 00:07:42.949382  828238 main.go:130] libmachine: () Calling .GetVersion
	I0813 00:07:42.949819  828238 main.go:130] libmachine: Using API Version  1
	I0813 00:07:42.949841  828238 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 00:07:42.950170  828238 main.go:130] libmachine: () Calling .GetMachineName
	I0813 00:07:42.950334  828238 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetIP
	I0813 00:07:42.955597  828238 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:07:42.956001  828238 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e4:55", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:04:13 +0000 UTC Type:0 Mac:52:54:00:b5:e4:55 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:multinode-20210813000359-820289 Clientid:01:52:54:00:b5:e4:55}
	I0813 00:07:42.956038  828238 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined IP address 192.168.39.22 and MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:07:42.956088  828238 host.go:66] Checking if "multinode-20210813000359-820289" exists ...
	I0813 00:07:42.956397  828238 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 00:07:42.956427  828238 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 00:07:42.966552  828238 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:39503
	I0813 00:07:42.966948  828238 main.go:130] libmachine: () Calling .GetVersion
	I0813 00:07:42.967382  828238 main.go:130] libmachine: Using API Version  1
	I0813 00:07:42.967403  828238 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 00:07:42.967773  828238 main.go:130] libmachine: () Calling .GetMachineName
	I0813 00:07:42.967941  828238 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .DriverName
	I0813 00:07:42.968119  828238 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 00:07:42.968155  828238 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHHostname
	I0813 00:07:42.972932  828238 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:07:42.973314  828238 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e4:55", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:04:13 +0000 UTC Type:0 Mac:52:54:00:b5:e4:55 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:multinode-20210813000359-820289 Clientid:01:52:54:00:b5:e4:55}
	I0813 00:07:42.973341  828238 main.go:130] libmachine: (multinode-20210813000359-820289) DBG | domain multinode-20210813000359-820289 has defined IP address 192.168.39.22 and MAC address 52:54:00:b5:e4:55 in network mk-multinode-20210813000359-820289
	I0813 00:07:42.973492  828238 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHPort
	I0813 00:07:42.973635  828238 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHKeyPath
	I0813 00:07:42.973776  828238 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetSSHUsername
	I0813 00:07:42.973910  828238 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289/id_rsa Username:docker}
	I0813 00:07:43.068244  828238 ssh_runner.go:149] Run: systemctl --version
	I0813 00:07:43.074368  828238 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 00:07:43.086243  828238 kubeconfig.go:93] found "multinode-20210813000359-820289" server: "https://192.168.39.22:8443"
	I0813 00:07:43.086265  828238 api_server.go:164] Checking apiserver status ...
	I0813 00:07:43.086294  828238 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 00:07:43.097189  828238 ssh_runner.go:149] Run: sudo egrep ^[0-9]+:freezer: /proc/2627/cgroup
	I0813 00:07:43.103364  828238 api_server.go:180] apiserver freezer: "11:freezer:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0dcc263218298eb0bc9dd91ad6c2c6d.slice/crio-fe1711400a92c91e4a417968e5aa7d64e9b5f216105c1ce6f378be5ba2438982.scope"
	I0813 00:07:43.103422  828238 ssh_runner.go:149] Run: sudo cat /sys/fs/cgroup/freezer/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0dcc263218298eb0bc9dd91ad6c2c6d.slice/crio-fe1711400a92c91e4a417968e5aa7d64e9b5f216105c1ce6f378be5ba2438982.scope/freezer.state
	I0813 00:07:43.110242  828238 api_server.go:202] freezer state: "THAWED"
	I0813 00:07:43.110271  828238 api_server.go:239] Checking apiserver healthz at https://192.168.39.22:8443/healthz ...
	I0813 00:07:43.118548  828238 api_server.go:265] https://192.168.39.22:8443/healthz returned 200:
	ok
	I0813 00:07:43.118572  828238 status.go:419] multinode-20210813000359-820289 apiserver status = Running (err=<nil>)
	I0813 00:07:43.118580  828238 status.go:255] multinode-20210813000359-820289 status: &{Name:multinode-20210813000359-820289 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0813 00:07:43.118594  828238 status.go:253] checking status of multinode-20210813000359-820289-m02 ...
	I0813 00:07:43.118952  828238 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 00:07:43.118989  828238 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 00:07:43.130465  828238 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44677
	I0813 00:07:43.130869  828238 main.go:130] libmachine: () Calling .GetVersion
	I0813 00:07:43.131386  828238 main.go:130] libmachine: Using API Version  1
	I0813 00:07:43.131407  828238 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 00:07:43.131810  828238 main.go:130] libmachine: () Calling .GetMachineName
	I0813 00:07:43.132033  828238 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetState
	I0813 00:07:43.135235  828238 status.go:328] multinode-20210813000359-820289-m02 host status = "Running" (err=<nil>)
	I0813 00:07:43.135250  828238 host.go:66] Checking if "multinode-20210813000359-820289-m02" exists ...
	I0813 00:07:43.135567  828238 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 00:07:43.135600  828238 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 00:07:43.145765  828238 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:39337
	I0813 00:07:43.146141  828238 main.go:130] libmachine: () Calling .GetVersion
	I0813 00:07:43.146526  828238 main.go:130] libmachine: Using API Version  1
	I0813 00:07:43.146544  828238 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 00:07:43.146910  828238 main.go:130] libmachine: () Calling .GetMachineName
	I0813 00:07:43.147079  828238 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetIP
	I0813 00:07:43.151763  828238 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:07:43.152123  828238 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b0:8d", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:05:25 +0000 UTC Type:0 Mac:52:54:00:9e:b0:8d Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:multinode-20210813000359-820289-m02 Clientid:01:52:54:00:9e:b0:8d}
	I0813 00:07:43.152156  828238 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined IP address 192.168.39.152 and MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:07:43.152278  828238 host.go:66] Checking if "multinode-20210813000359-820289-m02" exists ...
	I0813 00:07:43.152682  828238 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 00:07:43.152730  828238 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 00:07:43.162883  828238 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38211
	I0813 00:07:43.163268  828238 main.go:130] libmachine: () Calling .GetVersion
	I0813 00:07:43.163732  828238 main.go:130] libmachine: Using API Version  1
	I0813 00:07:43.163754  828238 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 00:07:43.164067  828238 main.go:130] libmachine: () Calling .GetMachineName
	I0813 00:07:43.164225  828238 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .DriverName
	I0813 00:07:43.164378  828238 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 00:07:43.164402  828238 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHHostname
	I0813 00:07:43.169470  828238 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:07:43.169867  828238 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:b0:8d", ip: ""} in network mk-multinode-20210813000359-820289: {Iface:virbr1 ExpiryTime:2021-08-13 01:05:25 +0000 UTC Type:0 Mac:52:54:00:9e:b0:8d Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:multinode-20210813000359-820289-m02 Clientid:01:52:54:00:9e:b0:8d}
	I0813 00:07:43.169903  828238 main.go:130] libmachine: (multinode-20210813000359-820289-m02) DBG | domain multinode-20210813000359-820289-m02 has defined IP address 192.168.39.152 and MAC address 52:54:00:9e:b0:8d in network mk-multinode-20210813000359-820289
	I0813 00:07:43.169981  828238 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHPort
	I0813 00:07:43.170168  828238 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHKeyPath
	I0813 00:07:43.170356  828238 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetSSHUsername
	I0813 00:07:43.170513  828238 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813000359-820289-m02/id_rsa Username:docker}
	I0813 00:07:43.262729  828238 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 00:07:43.273002  828238 status.go:255] multinode-20210813000359-820289-m02 status: &{Name:multinode-20210813000359-820289-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0813 00:07:43.273037  828238 status.go:253] checking status of multinode-20210813000359-820289-m03 ...
	I0813 00:07:43.273524  828238 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 00:07:43.273572  828238 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 00:07:43.285193  828238 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:39159
	I0813 00:07:43.285621  828238 main.go:130] libmachine: () Calling .GetVersion
	I0813 00:07:43.286134  828238 main.go:130] libmachine: Using API Version  1
	I0813 00:07:43.286156  828238 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 00:07:43.286492  828238 main.go:130] libmachine: () Calling .GetMachineName
	I0813 00:07:43.286663  828238 main.go:130] libmachine: (multinode-20210813000359-820289-m03) Calling .GetState
	I0813 00:07:43.289536  828238 status.go:328] multinode-20210813000359-820289-m03 host status = "Stopped" (err=<nil>)
	I0813 00:07:43.289554  828238 status.go:341] host is not running, skipping remaining checks
	I0813 00:07:43.289558  828238 status.go:255] multinode-20210813000359-820289-m03 status: &{Name:multinode-20210813000359-820289-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.93s)
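
Note: the stderr trace above shows how status concludes the apiserver is Running: pgrep for the kube-apiserver process, locate its freezer cgroup, then probe /healthz. A standalone sketch of that sequence follows; minikube runs these commands inside the VM over ssh, while this sketch runs them locally, and the IP and the insecure TLS setting are illustrative shortcuts, not minikube's actual client code.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"os/exec"
		"strings"
	)

	func main() {
		// 1. Find the apiserver PID (the same pgrep the log shows).
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err != nil {
			fmt.Println("apiserver process not found:", err)
			return
		}
		pid := strings.TrimSpace(string(out))

		// 2. Locate the apiserver's freezer cgroup; the log then cats its
		// freezer.state, expecting "THAWED".
		cg, _ := exec.Command("sudo", "egrep", "^[0-9]+:freezer:", "/proc/"+pid+"/cgroup").Output()
		fmt.Println("freezer cgroup:", strings.TrimSpace(string(cg)))

		// 3. Probe /healthz. InsecureSkipVerify is a shortcut for the sketch;
		// a real check would trust the cluster CA instead.
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.168.39.22:8443/healthz")
		if err != nil {
			fmt.Println("healthz unreachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz:", resp.Status)
	}
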
TestMultiNode/serial/StartAfterStop (48.52s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:235: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813000359-820289 node start m03 --alsologtostderr
E0813 00:08:03.458852  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812235933-820289/client.crt: no such file or directory
multinode_test.go:235: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210813000359-820289 node start m03 --alsologtostderr: (47.875419706s)
multinode_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813000359-820289 status
multinode_test.go:256: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (48.52s)

TestMultiNode/serial/RestartKeepsNodes (176.55s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:264: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20210813000359-820289
multinode_test.go:271: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-20210813000359-820289
multinode_test.go:271: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-20210813000359-820289: (7.15494148s)
multinode_test.go:276: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210813000359-820289 --wait=true -v=8 --alsologtostderr
E0813 00:08:50.748407  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/client.crt: no such file or directory
E0813 00:09:25.380040  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812235933-820289/client.crt: no such file or directory
multinode_test.go:276: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210813000359-820289 --wait=true -v=8 --alsologtostderr: (2m49.283755625s)
multinode_test.go:281: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20210813000359-820289
--- PASS: TestMultiNode/serial/RestartKeepsNodes (176.55s)

TestMultiNode/serial/DeleteNode (1.88s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:375: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813000359-820289 node delete m03
multinode_test.go:375: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210813000359-820289 node delete m03: (1.360891113s)
multinode_test.go:381: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813000359-820289 status --alsologtostderr
multinode_test.go:405: (dbg) Run:  kubectl get nodes
multinode_test.go:413: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.88s)
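
Note: the go-template query above pulls each node's Ready condition. An equivalent check, done by unmarshalling "kubectl get nodes -o json" in Go; the struct is trimmed to just the fields this check touches.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type nodeList struct {
		Items []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
			Status struct {
				Conditions []struct {
					Type   string `json:"type"`
					Status string `json:"status"`
				} `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		raw, err := exec.Command("kubectl", "get", "nodes", "-o", "json").Output()
		if err != nil {
			panic(err)
		}
		var nl nodeList
		if err := json.Unmarshal(raw, &nl); err != nil {
			panic(err)
		}
		for _, n := range nl.Items {
			for _, c := range n.Status.Conditions {
				if c.Type == "Ready" {
					fmt.Printf("%s Ready=%s\n", n.Metadata.Name, c.Status)
				}
			}
		}
	}
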
TestMultiNode/serial/StopMultiNode (5.29s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813000359-820289 stop
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210813000359-820289 stop: (5.125966437s)
multinode_test.go:301: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813000359-820289 status
multinode_test.go:301: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210813000359-820289 status: exit status 7 (79.830797ms)

-- stdout --
	multinode-20210813000359-820289
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20210813000359-820289-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813000359-820289 status --alsologtostderr
multinode_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210813000359-820289 status --alsologtostderr: exit status 7 (79.340874ms)

-- stdout --
	multinode-20210813000359-820289
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20210813000359-820289-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0813 00:11:35.498664  829380 out.go:298] Setting OutFile to fd 1 ...
	I0813 00:11:35.498829  829380 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 00:11:35.498837  829380 out.go:311] Setting ErrFile to fd 2...
	I0813 00:11:35.498840  829380 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 00:11:35.498924  829380 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	I0813 00:11:35.499070  829380 out.go:305] Setting JSON to false
	I0813 00:11:35.499088  829380 mustload.go:65] Loading cluster: multinode-20210813000359-820289
	I0813 00:11:35.499371  829380 status.go:253] checking status of multinode-20210813000359-820289 ...
	I0813 00:11:35.499704  829380 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 00:11:35.499765  829380 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 00:11:35.510245  829380 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35283
	I0813 00:11:35.510705  829380 main.go:130] libmachine: () Calling .GetVersion
	I0813 00:11:35.511188  829380 main.go:130] libmachine: Using API Version  1
	I0813 00:11:35.511209  829380 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 00:11:35.511545  829380 main.go:130] libmachine: () Calling .GetMachineName
	I0813 00:11:35.511736  829380 main.go:130] libmachine: (multinode-20210813000359-820289) Calling .GetState
	I0813 00:11:35.514391  829380 status.go:328] multinode-20210813000359-820289 host status = "Stopped" (err=<nil>)
	I0813 00:11:35.514404  829380 status.go:341] host is not running, skipping remaining checks
	I0813 00:11:35.514408  829380 status.go:255] multinode-20210813000359-820289 status: &{Name:multinode-20210813000359-820289 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0813 00:11:35.514421  829380 status.go:253] checking status of multinode-20210813000359-820289-m02 ...
	I0813 00:11:35.514705  829380 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 00:11:35.514742  829380 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0813 00:11:35.524641  829380 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36149
	I0813 00:11:35.525017  829380 main.go:130] libmachine: () Calling .GetVersion
	I0813 00:11:35.525459  829380 main.go:130] libmachine: Using API Version  1
	I0813 00:11:35.525486  829380 main.go:130] libmachine: () Calling .SetConfigRaw
	I0813 00:11:35.525798  829380 main.go:130] libmachine: () Calling .GetMachineName
	I0813 00:11:35.525972  829380 main.go:130] libmachine: (multinode-20210813000359-820289-m02) Calling .GetState
	I0813 00:11:35.528475  829380 status.go:328] multinode-20210813000359-820289-m02 host status = "Stopped" (err=<nil>)
	I0813 00:11:35.528491  829380 status.go:341] host is not running, skipping remaining checks
	I0813 00:11:35.528497  829380 status.go:255] multinode-20210813000359-820289-m02 status: &{Name:multinode-20210813000359-820289-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (5.29s)

TestMultiNode/serial/RestartMultiNode (115.46s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:335: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210813000359-820289 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0813 00:11:41.535467  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812235933-820289/client.crt: no such file or directory
E0813 00:12:09.220483  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812235933-820289/client.crt: no such file or directory
multinode_test.go:335: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210813000359-820289 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m54.932343436s)
multinode_test.go:341: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813000359-820289 status --alsologtostderr
multinode_test.go:355: (dbg) Run:  kubectl get nodes
multinode_test.go:363: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (115.46s)

TestMultiNode/serial/ValidateNameConflict (61.98s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:424: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20210813000359-820289
multinode_test.go:433: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210813000359-820289-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:433: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-20210813000359-820289-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (103.513564ms)

-- stdout --
	* [multinode-20210813000359-820289-m02] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	  - MINIKUBE_LOCATION=12230
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-20210813000359-820289-m02' is duplicated with machine name 'multinode-20210813000359-820289-m02' in profile 'multinode-20210813000359-820289'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:441: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210813000359-820289-m03 --driver=kvm2  --container-runtime=crio
E0813 00:13:50.748368  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/client.crt: no such file or directory
multinode_test.go:441: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210813000359-820289-m03 --driver=kvm2  --container-runtime=crio: (1m0.690010223s)
multinode_test.go:448: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20210813000359-820289
multinode_test.go:448: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-20210813000359-820289: exit status 80 (229.92819ms)

-- stdout --
	* Adding node m03 to cluster multinode-20210813000359-820289
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20210813000359-820289-m03 already exists in multinode-20210813000359-820289-m03 profile
	* 
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	╭─────────────────────────────────────────────────────────────────────────────╮
	│                                                                             │
	│    * If the above advice does not help, please let us know:                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose               │
	│                                                                             │
	│    * Please attach the following file to the GitHub issue:                  │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:453: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-20210813000359-820289-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (61.98s)

TestDebPackageInstall/install_amd64_debian:sid/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_debian:sid/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:sid/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver (10.88s)

=== RUN   TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/KVM_Linux_crio_integration/out:/var/tmp debian:sid sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/KVM_Linux_crio_integration/out:/var/tmp debian:sid sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (10.881387295s)
--- PASS: TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver (10.88s)
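
Note: the kvm2-driver cases in this group all follow the pattern visible in the Run line above: launch the distro image in Docker, mount the build output at /var/tmp, install libvirt0, then dpkg-install the packaged driver. A Go sketch of that loop over the images this report exercises:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		images := []string{
			"debian:sid", "debian:latest", "debian:10", "debian:9",
			"ubuntu:latest", "ubuntu:20.10", "ubuntu:20.04", "ubuntu:18.04",
		}
		script := "apt-get update; apt-get install -y libvirt0; " +
			"dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
		for _, img := range images {
			// Mount the Jenkins build output where the .deb lives.
			cmd := exec.Command("docker", "run", "--rm",
				"-v/home/jenkins/workspace/KVM_Linux_crio_integration/out:/var/tmp",
				img, "sh", "-c", script)
			if err := cmd.Run(); err != nil {
				fmt.Printf("%s: install failed: %v\n", img, err)
			} else {
				fmt.Printf("%s: ok\n", img)
			}
		}
	}
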
TestDebPackageInstall/install_amd64_debian:latest/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_debian:latest/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:latest/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver (9.96s)

=== RUN   TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/KVM_Linux_crio_integration/out:/var/tmp debian:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/KVM_Linux_crio_integration/out:/var/tmp debian:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (9.960828635s)
--- PASS: TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver (9.96s)

TestDebPackageInstall/install_amd64_debian:10/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_debian:10/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:10/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian:10/kvm2-driver (9.82s)

=== RUN   TestDebPackageInstall/install_amd64_debian:10/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/KVM_Linux_crio_integration/out:/var/tmp debian:10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/KVM_Linux_crio_integration/out:/var/tmp debian:10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (9.82389492s)
--- PASS: TestDebPackageInstall/install_amd64_debian:10/kvm2-driver (9.82s)

TestDebPackageInstall/install_amd64_debian:9/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_debian:9/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:9/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian:9/kvm2-driver (8.11s)

=== RUN   TestDebPackageInstall/install_amd64_debian:9/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/KVM_Linux_crio_integration/out:/var/tmp debian:9 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
E0813 00:15:13.793381  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/client.crt: no such file or directory
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/KVM_Linux_crio_integration/out:/var/tmp debian:9 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (8.107823347s)
--- PASS: TestDebPackageInstall/install_amd64_debian:9/kvm2-driver (8.11s)

TestDebPackageInstall/install_amd64_ubuntu:latest/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:latest/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:latest/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver (16.02s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/KVM_Linux_crio_integration/out:/var/tmp ubuntu:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/KVM_Linux_crio_integration/out:/var/tmp ubuntu:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (16.018024146s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver (16.02s)

TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver (14.69s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/KVM_Linux_crio_integration/out:/var/tmp ubuntu:20.10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/KVM_Linux_crio_integration/out:/var/tmp ubuntu:20.10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (14.691214854s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver (14.69s)

TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver (18.12s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/KVM_Linux_crio_integration/out:/var/tmp ubuntu:20.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/KVM_Linux_crio_integration/out:/var/tmp ubuntu:20.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (18.118677047s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver (18.12s)

TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver (18.18s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/KVM_Linux_crio_integration/out:/var/tmp ubuntu:18.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/KVM_Linux_crio_integration/out:/var/tmp ubuntu:18.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (18.180753862s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver (18.18s)

TestScheduledStopUnix (88.06s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-20210813001908-820289 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-20210813001908-820289 --memory=2048 --driver=kvm2  --container-runtime=crio: (59.099745087s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210813001908-820289 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20210813001908-820289 -n scheduled-stop-20210813001908-820289
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210813001908-820289 --schedule 8s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210813001908-820289 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210813001908-820289 -n scheduled-stop-20210813001908-820289
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20210813001908-820289
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210813001908-820289 --schedule 5s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20210813001908-820289
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-20210813001908-820289: exit status 7 (68.254544ms)

-- stdout --
	scheduled-stop-20210813001908-820289
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210813001908-820289 -n scheduled-stop-20210813001908-820289
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210813001908-820289 -n scheduled-stop-20210813001908-820289: exit status 7 (63.15428ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-20210813001908-820289" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-20210813001908-820289
--- PASS: TestScheduledStopUnix (88.06s)
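
Note: the sequence above arms a stop timer (--schedule), cancels one, re-arms a short one, and then confirms via status that the host reached Stopped (exit status 7 is expected there). A Go sketch of a poll loop for that final wait, with the profile name from this log and an assumed two-minute timeout:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		const profile = "scheduled-stop-20210813001908-820289"
		deadline := time.Now().Add(2 * time.Minute) // assumed timeout
		for time.Now().Before(deadline) {
			// status exits non-zero once the host is down, but stdout still
			// carries the state string, so the error is ignored here.
			out, _ := exec.Command("out/minikube-linux-amd64", "status",
				"--format={{.Host}}", "-p", profile).Output()
			if strings.TrimSpace(string(out)) == "Stopped" {
				fmt.Println("scheduled stop completed")
				return
			}
			time.Sleep(5 * time.Second)
		}
		fmt.Println("timed out waiting for scheduled stop")
	}
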
TestRunningBinaryUpgrade (249.01s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Run:  /tmp/minikube-v1.6.2.756079223.exe start -p running-upgrade-20210813002036-820289 --memory=2200 --vm-driver=kvm2  --container-runtime=crio

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Non-zero exit: /tmp/minikube-v1.6.2.756079223.exe start -p running-upgrade-20210813002036-820289 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: exit status 70 (3.792829378s)

-- stdout --
	! [running-upgrade-20210813002036-820289] minikube v1.6.2 on Debian 9.13
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	  - MINIKUBE_LOCATION=12230
	  - KUBECONFIG=/tmp/legacy_kubeconfig465494378
	* Selecting 'kvm2' driver from user configuration (alternates: [none])
	* Downloading VM boot image ...

-- /stdout --
** stderr ** 
	* minikube 1.22.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.22.0
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	
	! 'kvm2' driver reported an issue: /usr/bin/virsh domcapabilities --virttype kvm failed:
	error: failed to get emulator capabilities
	error: invalid argument: KVM is not supported by '/usr/bin/qemu-system-x86_64' on this host
	* Suggestion: Follow your Linux distribution instructions for configuring KVM
	* Documentation: https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/
	
	    > minikube-v1.6.0.iso.sha256: 65 B / 65 B [--------------] 100.00% ? p/s 0s
	    > minikube-v1.6.0.iso: 150.93 MiB / 150.93 MiB [] 100.00% 126.43 MiB p/s 1s
	* 
	X Failed to cache ISO: rename /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/iso/minikube-v1.6.0.iso.download /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/iso/minikube-v1.6.0.iso: no such file or directory
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:128: (dbg) Run:  /tmp/minikube-v1.6.2.756079223.exe start -p running-upgrade-20210813002036-820289 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0813 00:21:41.536083  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812235933-820289/client.crt: no such file or directory

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Done: /tmp/minikube-v1.6.2.756079223.exe start -p running-upgrade-20210813002036-820289 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m43.175029806s)
version_upgrade_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-20210813002036-820289 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-20210813002036-820289 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m19.786711347s)
helpers_test.go:176: Cleaning up "running-upgrade-20210813002036-820289" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-20210813002036-820289
--- PASS: TestRunningBinaryUpgrade (249.01s)

                                                
                                    
x
+
TestKubernetesUpgrade (178.72s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:224: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210813002329-820289 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:224: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210813002329-820289 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m18.258310895s)
version_upgrade_test.go:229: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-20210813002329-820289
version_upgrade_test.go:229: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-20210813002329-820289: (3.119934225s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-20210813002329-820289 status --format={{.Host}}
version_upgrade_test.go:234: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-20210813002329-820289 status --format={{.Host}}: exit status 7 (83.926089ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:236: status error: exit status 7 (may be ok)
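[Editor's note] The --format flag used here is ordinary Go text/template syntax rendered against minikube's status struct, which is why exit status 7 can still print a usable "Stopped". A minimal standalone illustration follows; the Status struct models only the fields this report's templates reference ({{.Host}}, {{.Kubelet}}, {{.APIServer}}) and is an assumption, not minikube's real type.

package main

import (
	"os"
	"text/template"
)

// Status mirrors only the fields the report's templates reference;
// minikube's actual status struct is larger.
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	// "{{.Host}}" selects one field of the struct, exactly as the
	// `status --format={{.Host}}` invocations above do.
	t := template.Must(template.New("status").Parse("{{.Host}}\n"))
	if err := t.Execute(os.Stdout, Status{Host: "Stopped"}); err != nil {
		panic(err)
	}
	// prints: Stopped
}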
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210813002329-820289 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:245: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210813002329-820289 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m23.468000917s)
version_upgrade_test.go:250: (dbg) Run:  kubectl --context kubernetes-upgrade-20210813002329-820289 version --output=json
version_upgrade_test.go:269: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:271: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210813002329-820289 --memory=2200 --kubernetes-version=v1.14.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:271: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210813002329-820289 --memory=2200 --kubernetes-version=v1.14.0 --driver=kvm2  --container-runtime=crio: exit status 106 (150.855076ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-20210813002329-820289] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	  - MINIKUBE_LOCATION=12230
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.22.0-rc.0 cluster to v1.14.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.14.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20210813002329-820289
	    minikube start -p kubernetes-upgrade-20210813002329-820289 --kubernetes-version=v1.14.0
	    
	    2) Create a second cluster with Kubernetes 1.14.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20210813002329-8202892 --kubernetes-version=v1.14.0
	    
	    3) Use the existing cluster at version Kubernetes 1.22.0-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20210813002329-820289 --kubernetes-version=v1.22.0-rc.0
	    

                                                
                                                
** /stderr **
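[Editor's note] The exit-106 refusal above is a version guard: the requested Kubernetes version is compared against the cluster's existing one, and anything lower is rejected as K8S_DOWNGRADE_UNSUPPORTED. A minimal sketch of such a guard, assuming a plain semver comparison via golang.org/x/mod/semver; checkDowngrade is hypothetical, not minikube's implementation.

package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

// checkDowngrade rejects any request for a Kubernetes version lower than
// the one the existing cluster already runs.
func checkDowngrade(existing, requested string) error {
	if semver.Compare(requested, existing) < 0 {
		return fmt.Errorf("K8S_DOWNGRADE_UNSUPPORTED: unable to safely downgrade existing Kubernetes %s cluster to %s", existing, requested)
	}
	return nil
}

func main() {
	// Matches the transcript: v1.14.0 requested against a v1.22.0-rc.0 cluster.
	if err := checkDowngrade("v1.22.0-rc.0", "v1.14.0"); err != nil {
		fmt.Println("X Exiting due to", err)
	}
}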
version_upgrade_test.go:275: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:277: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210813002329-820289 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:277: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210813002329-820289 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (11.927356801s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-20210813002329-820289" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-20210813002329-820289
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-20210813002329-820289: (1.626021582s)
--- PASS: TestKubernetesUpgrade (178.72s)

                                                
                                    
x
+
TestPause/serial/Start (93.95s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:77: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20210813002347-820289 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
E0813 00:23:50.749282  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/client.crt: no such file or directory

                                                
                                                
=== CONT  TestPause/serial/Start
pause_test.go:77: (dbg) Done: out/minikube-linux-amd64 start -p pause-20210813002347-820289 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m33.949205133s)
--- PASS: TestPause/serial/Start (93.95s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:213: (dbg) Run:  out/minikube-linux-amd64 start -p false-20210813002446-820289 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:213: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-20210813002446-820289 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (139.386983ms)

                                                
                                                
-- stdout --
	* [false-20210813002446-820289] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	  - MINIKUBE_LOCATION=12230
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0813 00:24:46.194121  858047 out.go:298] Setting OutFile to fd 1 ...
	I0813 00:24:46.194224  858047 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 00:24:46.194259  858047 out.go:311] Setting ErrFile to fd 2...
	I0813 00:24:46.194264  858047 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 00:24:46.194398  858047 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	I0813 00:24:46.194754  858047 out.go:305] Setting JSON to false
	I0813 00:24:46.231497  858047 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-14","uptime":14849,"bootTime":1628799437,"procs":182,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 00:24:46.231632  858047 start.go:121] virtualization: kvm guest
	I0813 00:24:46.234453  858047 out.go:177] * [false-20210813002446-820289] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 00:24:46.234606  858047 notify.go:169] Checking for updates...
	I0813 00:24:46.236032  858047 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0813 00:24:46.237477  858047 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 00:24:46.238912  858047 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	I0813 00:24:46.240174  858047 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 00:24:46.240837  858047 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 00:24:46.271629  858047 out.go:177] * Using the kvm2 driver based on user configuration
	I0813 00:24:46.271653  858047 start.go:278] selected driver: kvm2
	I0813 00:24:46.271658  858047 start.go:751] validating driver "kvm2" against <nil>
	I0813 00:24:46.271677  858047 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0813 00:24:46.273696  858047 out.go:177] 
	W0813 00:24:46.273790  858047 out.go:242] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0813 00:24:46.275077  858047 out.go:177] 

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "false-20210813002446-820289" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-20210813002446-820289
--- PASS: TestNetworkPlugins/group/false (0.41s)
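[Editor's note] This subtest passes precisely because the start is refused: with --container-runtime=crio, disabling CNI is invalid, so minikube exits 14 with MK_USAGE before creating a VM. A hypothetical sketch of that up-front validation; validateRuntime is illustrative only.

package main

import (
	"errors"
	"fmt"
	"os"
)

// validateRuntime sketches the check behind the MK_USAGE error above:
// CRI-O relies on a CNI plugin for pod networking, so --cni=false is
// rejected before any cluster work starts.
func validateRuntime(containerRuntime, cni string) error {
	if containerRuntime == "crio" && cni == "false" {
		return errors.New(`The "crio" container runtime requires CNI`)
	}
	return nil
}

func main() {
	if err := validateRuntime("crio", "false"); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
		os.Exit(14) // the exit status recorded in the log
	}
}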

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (140.6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20210813002446-820289 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.14.0

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20210813002446-820289 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.14.0: (2m20.599351711s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (140.60s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (48.1s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:89: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20210813002347-820289 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:89: (dbg) Done: out/minikube-linux-amd64 start -p pause-20210813002347-820289 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (48.082306622s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (48.10s)

                                                
                                    
x
+
TestPause/serial/Pause (0.98s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:107: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20210813002347-820289 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.98s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.28s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-20210813002347-820289 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-20210813002347-820289 --output=json --layout=cluster: exit status 2 (279.9588ms)

                                                
                                                
-- stdout --
	{"Name":"pause-20210813002347-820289","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.22.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20210813002347-820289","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.28s)
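[Editor's note] The JSON above uses HTTP-style status codes per component: 418 "Paused" for the cluster and apiserver, 405 "Stopped" for the kubelet, 200 "OK" for the kubeconfig, which is why the status command itself exits 2 (the cluster is intentionally not "OK"). A sketch of decoding that payload, modeling only the fields visible in this report:

package main

import (
	"encoding/json"
	"fmt"
)

// ClusterStatus mirrors the `status --output=json --layout=cluster` payload
// printed above; only fields visible in this report are included.
type ClusterStatus struct {
	Name       string               `json:"Name"`
	StatusCode int                  `json:"StatusCode"`
	StatusName string               `json:"StatusName"`
	Nodes      []Node               `json:"Nodes"`
	Components map[string]Component `json:"Components"`
}

type Component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type Node struct {
	Name       string               `json:"Name"`
	StatusCode int                  `json:"StatusCode"`
	StatusName string               `json:"StatusName"`
	Components map[string]Component `json:"Components"`
}

func main() {
	// Abbreviated from the log output above.
	raw := `{"Name":"pause-20210813002347-820289","StatusCode":418,"StatusName":"Paused","Nodes":[{"Name":"pause-20210813002347-820289","StatusCode":200,"StatusName":"OK","Components":{"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`
	var st ClusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	fmt.Println(st.StatusName, st.Nodes[0].Components["kubelet"].StatusName)
	// prints: Paused Stopped
}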

                                                
                                    
x
+
TestPause/serial/Unpause (0.94s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:118: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-20210813002347-820289 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.94s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (5.95s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:107: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20210813002347-820289 --alsologtostderr -v=5

                                                
                                                
=== CONT  TestPause/serial/PauseAgain
pause_test.go:107: (dbg) Done: out/minikube-linux-amd64 pause -p pause-20210813002347-820289 --alsologtostderr -v=5: (5.954598039s)
--- PASS: TestPause/serial/PauseAgain (5.95s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (1.66s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:129: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-20210813002347-820289 --alsologtostderr -v=5
pause_test.go:129: (dbg) Done: out/minikube-linux-amd64 delete -p pause-20210813002347-820289 --alsologtostderr -v=5: (1.661841441s)
--- PASS: TestPause/serial/DeletePaused (1.66s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.63s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:139: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.63s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (167.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20210813002620-820289 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.22.0-rc.0

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20210813002620-820289 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.22.0-rc.0: (2m47.267124076s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (167.27s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (114.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20210813002628-820289 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.21.3
E0813 00:26:41.535648  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812235933-820289/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20210813002628-820289 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.21.3: (1m54.069145719s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (114.07s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (10.76s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context old-k8s-version-20210813002446-820289 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [3116ce40-fbcd-11eb-8891-525400b1b786] Pending
helpers_test.go:343: "busybox" [3116ce40-fbcd-11eb-8891-525400b1b786] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [3116ce40-fbcd-11eb-8891-525400b1b786] Running

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:169: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.033264604s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context old-k8s-version-20210813002446-820289 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.76s)
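[Editor's note] DeployApp finishes by exec'ing `ulimit -n` inside the busybox pod: the exec itself exercises the CRI streaming path end to end, and the printed value confirms the pod got a sane open-file limit. The argv below is taken verbatim from the log; wrapping it in os/exec is this editor's sketch of how the test shells out, not the test's literal code.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exec through the apiserver into the busybox pod and read its
	// file-descriptor limit, as the test step above does.
	out, err := exec.Command("kubectl",
		"--context", "old-k8s-version-20210813002446-820289",
		"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").CombinedOutput()
	if err != nil {
		fmt.Println("exec failed:", err)
		return
	}
	fmt.Printf("open-file limit: %s", out)
}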

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.29s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:208: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-20210813002036-820289

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:208: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-20210813002036-820289: (1.285415198s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.29s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.58s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-20210813002446-820289 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-20210813002446-820289 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.317263402s)
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context old-k8s-version-20210813002446-820289 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.58s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-different-port/serial/FirstStart (94.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20210813002719-820289 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.21.3

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20210813002719-820289 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.21.3: (1m34.149657993s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (94.15s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (10.14s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-20210813002446-820289 --alsologtostderr -v=3
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-20210813002446-820289 --alsologtostderr -v=3: (10.144128744s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (10.14s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210813002446-820289 -n old-k8s-version-20210813002446-820289
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210813002446-820289 -n old-k8s-version-20210813002446-820289: exit status 7 (89.834416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-20210813002446-820289 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (86.93s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20210813002446-820289 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.14.0

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20210813002446-820289 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.14.0: (1m26.548521615s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210813002446-820289 -n old-k8s-version-20210813002446-820289
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (86.93s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (11.76s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context embed-certs-20210813002628-820289 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [5023cbd5-d488-4fee-938d-9e8c70d0281e] Pending
helpers_test.go:343: "busybox" [5023cbd5-d488-4fee-938d-9e8c70d0281e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [5023cbd5-d488-4fee-938d-9e8c70d0281e] Running
start_stop_delete_test.go:169: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.032290038s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context embed-certs-20210813002628-820289 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.76s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.4s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-20210813002628-820289 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:178: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-20210813002628-820289 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.240291666s)
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context embed-certs-20210813002628-820289 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.40s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (4.15s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-20210813002628-820289 --alsologtostderr -v=3
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-20210813002628-820289 --alsologtostderr -v=3: (4.151358692s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (4.15s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210813002628-820289 -n embed-certs-20210813002628-820289
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210813002628-820289 -n embed-certs-20210813002628-820289: exit status 7 (98.348941ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-20210813002628-820289 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (400.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20210813002628-820289 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.21.3
E0813 00:28:50.749158  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20210813002628-820289 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.21.3: (6m39.711394466s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210813002628-820289 -n embed-certs-20210813002628-820289
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (400.06s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-different-port/serial/DeployApp (11.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context default-k8s-different-port-20210813002719-820289 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [dbe70385-b3ed-43b8-940f-85aa40fd329a] Pending
helpers_test.go:343: "busybox" [dbe70385-b3ed-43b8-940f-85aa40fd329a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
helpers_test.go:343: "busybox" [dbe70385-b3ed-43b8-940f-85aa40fd329a] Running

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:169: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 10.04688021s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context default-k8s-different-port-20210813002719-820289 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (11.02s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (8.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-5d8978d65d-4zl79" [7296c706-fbcd-11eb-8bfb-525400b1b786] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:343: "kubernetes-dashboard-5d8978d65d-4zl79" [7296c706-fbcd-11eb-8bfb-525400b1b786] Running

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.265478333s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (8.27s)
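[Editor's note] The recurring "waiting Nm for pods matching ..." steps from helpers_test.go are poll loops over a label selector. A simplified sketch of that pattern under the assumption that polling kubectl's jsonpath output is enough; the real helper watches full pod conditions (Ready, ContainersReady) rather than just the phase, and waitForLabel is hypothetical.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForLabel polls until some pod carrying the label reports phase
// Running, or the deadline passes.
func waitForLabel(kubecontext, ns, label string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, _ := exec.Command("kubectl", "--context", kubecontext,
			"get", "pods", "-n", ns, "-l", label,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if strings.Contains(string(out), "Running") {
			return nil
		}
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("timed out waiting for %s in %s", label, ns)
}

func main() {
	err := waitForLabel("old-k8s-version-20210813002446-820289",
		"kubernetes-dashboard", "k8s-app=kubernetes-dashboard", 9*time.Minute)
	fmt.Println(err)
}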

                                                
                                    
x
+
TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (1.84s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-different-port-20210813002719-820289 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-different-port-20210813002719-820289 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.682536975s)
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context default-k8s-different-port-20210813002719-820289 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (1.84s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-5d8978d65d-4zl79" [7296c706-fbcd-11eb-8bfb-525400b1b786] Running

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013521642s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context old-k8s-version-20210813002446-820289 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-different-port/serial/Stop (5.4s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-different-port-20210813002719-820289 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-different-port-20210813002719-820289 --alsologtostderr -v=3: (5.3972058s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (5.40s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (13.94s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context no-preload-20210813002620-820289 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [12c4a282-f71c-4635-8f62-4f146b3979c7] Pending
helpers_test.go:343: "busybox" [12c4a282-f71c-4635-8f62-4f146b3979c7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/DeployApp
helpers_test.go:343: "busybox" [12c4a282-f71c-4635-8f62-4f146b3979c7] Running

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:169: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 13.090389246s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context no-preload-20210813002620-820289 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (13.94s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-20210813002446-820289 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)
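[Editor's note] VerifyKubernetesImages lists images through `crictl images -o json` and flags anything outside the expected minikube set, hence the two "Found non-minikube image" lines above (kindnetd and the busybox deployed earlier). The decoding sketch below assumes crictl's JSON carries an "images" array with "repoTags", and its prefix check is a simplified stand-in for the test's expected-image list.

package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// crictlImages models only the field this check needs; the assumed shape is
// an "images" array whose entries carry "repoTags".
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	raw := `{"images":[{"repoTags":["k8s.gcr.io/pause:3.1"]},{"repoTags":["docker.io/library/busybox:1.28.4-glibc"]}]}`
	var imgs crictlImages
	if err := json.Unmarshal([]byte(raw), &imgs); err != nil {
		panic(err)
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			// Simplified: anything outside k8s.gcr.io gets flagged, mirroring
			// the report's "Found non-minikube image" lines.
			if !strings.HasPrefix(tag, "k8s.gcr.io/") {
				fmt.Println("Found non-minikube image:", tag)
			}
		}
	}
}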

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (4.97s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-20210813002446-820289 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:284: (dbg) Done: out/minikube-linux-amd64 pause -p old-k8s-version-20210813002446-820289 --alsologtostderr -v=1: (2.392216272s)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20210813002446-820289 -n old-k8s-version-20210813002446-820289

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20210813002446-820289 -n old-k8s-version-20210813002446-820289: exit status 2 (352.029293ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20210813002446-820289 -n old-k8s-version-20210813002446-820289
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20210813002446-820289 -n old-k8s-version-20210813002446-820289: exit status 2 (315.056192ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-20210813002446-820289 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Done: out/minikube-linux-amd64 unpause -p old-k8s-version-20210813002446-820289 --alsologtostderr -v=1: (1.146517797s)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20210813002446-820289 -n old-k8s-version-20210813002446-820289

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20210813002446-820289 -n old-k8s-version-20210813002446-820289
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.97s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.96s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210813002719-820289 -n default-k8s-different-port-20210813002719-820289
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210813002719-820289 -n default-k8s-different-port-20210813002719-820289: exit status 7 (108.566296ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-different-port-20210813002719-820289 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.96s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-different-port/serial/SecondStart (389.89s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20210813002719-820289 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.21.3

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20210813002719-820289 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.21.3: (6m29.518803345s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210813002719-820289 -n default-k8s-different-port-20210813002719-820289
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (389.89s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (96.16s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20210813002919-820289 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.22.0-rc.0

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20210813002919-820289 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.22.0-rc.0: (1m36.159570129s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (96.16s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-20210813002620-820289 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:178: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-20210813002620-820289 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.173511202s)
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context no-preload-20210813002620-820289 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.31s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (93.54s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-20210813002620-820289 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-20210813002620-820289 --alsologtostderr -v=3: (1m33.544656182s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (93.54s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.36s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-20210813002919-820289 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-20210813002919-820289 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.359332823s)
start_stop_delete_test.go:184: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.36s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210813002620-820289 -n no-preload-20210813002620-820289
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210813002620-820289 -n no-preload-20210813002620-820289: exit status 7 (78.77846ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-20210813002620-820289 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (416.91s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20210813002620-820289 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.22.0-rc.0

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20210813002620-820289 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.22.0-rc.0: (6m56.593493801s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210813002620-820289 -n no-preload-20210813002620-820289
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (416.91s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (63.44s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-20210813002919-820289 --alsologtostderr -v=3
E0813 00:31:41.536110  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812235933-820289/client.crt: no such file or directory
E0813 00:31:53.793945  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/client.crt: no such file or directory
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-20210813002919-820289 --alsologtostderr -v=3: (1m3.435006333s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (63.44s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210813002919-820289 -n newest-cni-20210813002919-820289
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210813002919-820289 -n newest-cni-20210813002919-820289: exit status 7 (68.663199ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-20210813002919-820289 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (74.54s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20210813002919-820289 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.22.0-rc.0
E0813 00:32:07.560384  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210813002446-820289/client.crt: no such file or directory
E0813 00:32:07.565790  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210813002446-820289/client.crt: no such file or directory
E0813 00:32:07.576069  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210813002446-820289/client.crt: no such file or directory
E0813 00:32:07.596341  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210813002446-820289/client.crt: no such file or directory
E0813 00:32:07.636615  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210813002446-820289/client.crt: no such file or directory
E0813 00:32:07.716957  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210813002446-820289/client.crt: no such file or directory
E0813 00:32:07.877442  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210813002446-820289/client.crt: no such file or directory
E0813 00:32:08.197810  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210813002446-820289/client.crt: no such file or directory
E0813 00:32:08.838050  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210813002446-820289/client.crt: no such file or directory
E0813 00:32:10.118311  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210813002446-820289/client.crt: no such file or directory
E0813 00:32:12.679602  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210813002446-820289/client.crt: no such file or directory
E0813 00:32:17.799874  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210813002446-820289/client.crt: no such file or directory
E0813 00:32:28.040965  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210813002446-820289/client.crt: no such file or directory
E0813 00:32:48.521429  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210813002446-820289/client.crt: no such file or directory
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20210813002919-820289 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.22.0-rc.0: (1m14.192550822s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210813002919-820289 -n newest-cni-20210813002919-820289
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (74.54s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:246: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:257: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.33s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-20210813002919-820289 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.33s)
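VerifyKubernetesImages lists images on the node over SSH and reports anything outside the expected minikube set (the kindest/kindnetd hit above). A rough standalone sketch, assuming crictl's JSON shape (a top-level `images` array with `repoTags`) and using a prefix heuristic in place of the suite's real allowlist:

    // Sketch: list node images and report unexpected ones.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
        "strings"
    )

    // imageList mirrors only the parts of `crictl images -o json` used here.
    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        out, err := exec.Command("out/minikube-linux-amd64", "ssh",
            "-p", "newest-cni-20210813002919-820289",
            "sudo crictl images -o json").Output()
        if err != nil {
            panic(err)
        }
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            panic(err)
        }
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                // Prefix heuristic standing in for the suite's allowlist.
                if !strings.HasPrefix(tag, "k8s.gcr.io/") &&
                    !strings.HasPrefix(tag, "gcr.io/k8s-minikube/") {
                    fmt.Println("Found non-minikube image:", tag)
                }
            }
        }
    }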

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.85s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-20210813002919-820289 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-20210813002919-820289 --alsologtostderr -v=1: (1.030086222s)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20210813002919-820289 -n newest-cni-20210813002919-820289
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20210813002919-820289 -n newest-cni-20210813002919-820289: exit status 2 (258.624677ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20210813002919-820289 -n newest-cni-20210813002919-820289
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20210813002919-820289 -n newest-cni-20210813002919-820289: exit status 2 (258.510919ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-20210813002919-820289 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20210813002919-820289 -n newest-cni-20210813002919-820289
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20210813002919-820289 -n newest-cni-20210813002919-820289
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.85s)
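The Pause subtest's exit-status dance reads as: pause, confirm the apiserver reports Paused and the kubelet Stopped (both via `status` calls that exit 2, which is expected while paused), then unpause and confirm both recover. A condensed sketch of that sequence, not the test's code:

    // Sketch of the pause/unpause verification sequence.
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    // componentStatus asks `minikube status` for one field; exit status 2
    // (component not running) is tolerated, since a paused apiserver and a
    // stopped kubelet are exactly what the paused phase should show.
    func componentStatus(profile, field string) string {
        out, err := exec.Command("out/minikube-linux-amd64", "status",
            "--format={{."+field+"}}", "-p", profile, "-n", profile).Output()
        var ee *exec.ExitError
        if err != nil && !(errors.As(err, &ee) && ee.ExitCode() == 2) {
            panic(err)
        }
        return string(out)
    }

    func main() {
        profile := "newest-cni-20210813002919-820289"
        run := func(args ...string) {
            if err := exec.Command("out/minikube-linux-amd64", args...).Run(); err != nil {
                panic(err)
            }
        }

        run("pause", "-p", profile, "--alsologtostderr", "-v=1")
        fmt.Print(componentStatus(profile, "APIServer")) // expect Paused
        fmt.Print(componentStatus(profile, "Kubelet"))   // expect Stopped

        run("unpause", "-p", profile, "--alsologtostderr", "-v=1")
        fmt.Print(componentStatus(profile, "APIServer")) // expect Running
        fmt.Print(componentStatus(profile, "Kubelet"))   // expect Running
    }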

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (78.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p auto-20210813002445-820289 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=kvm2  --container-runtime=crio
E0813 00:33:29.482252  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210813002446-820289/client.crt: no such file or directory
E0813 00:33:50.748522  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/client.crt: no such file or directory
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p auto-20210813002445-820289 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=kvm2  --container-runtime=crio: (1m18.915304526s)
--- PASS: TestNetworkPlugins/group/auto/Start (78.92s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-20210813002445-820289 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.24s)
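KubeletFlags is a one-command check: `pgrep -a kubelet` over `minikube ssh` returns the kubelet's full command line, which the test can inspect for the flags minikube configured. A bare-bones sketch (the actual flag assertions are not shown in this log):

    // Sketch: read the kubelet command line over minikube ssh.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("out/minikube-linux-amd64", "ssh",
            "-p", "auto-20210813002445-820289", "pgrep -a kubelet").Output()
        if err != nil {
            panic(err) // no kubelet process, or ssh failed
        }
        cmdline := strings.TrimSpace(string(out))
        fmt.Println(cmdline)
        // The real test would assert on specific flags here, e.g. that the
        // configured container runtime shows up in the command line.
        if !strings.Contains(cmdline, "--container-runtime") {
            fmt.Println("note: no --container-runtime flag found")
        }
    }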

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (10.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context auto-20210813002445-820289 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-c97sl" [c7d2444f-6b72-4515-8a05-4c1838a9759a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-66fbc655d5-c97sl" [c7d2444f-6b72-4515-8a05-4c1838a9759a] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.010314518s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.50s)
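Each NetCatPod step recreates the netcat deployment from testdata and then waits for pods matching `app=netcat` to report Running, which is what the `waiting 15m0s for pods matching ...` lines track. A self-contained polling sketch of that wait, shelling out to kubectl rather than using the suite's helpers:

    // Sketch: poll until all app=netcat pods report phase Running.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        ctx := "auto-20210813002445-820289"

        // Recreate the deployment from the test fixture.
        if err := exec.Command("kubectl", "--context", ctx, "replace",
            "--force", "-f", "testdata/netcat-deployment.yaml").Run(); err != nil {
            panic(err)
        }

        deadline := time.Now().Add(15 * time.Minute)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "--context", ctx, "get", "pods",
                "-l", "app=netcat",
                "-o", "jsonpath={.items[*].status.phase}").Output()
            phases := strings.Fields(string(out))
            if err == nil && len(phases) > 0 {
                healthy := true
                for _, p := range phases {
                    if p != "Running" {
                        healthy = false
                    }
                }
                if healthy {
                    fmt.Println("app=netcat healthy")
                    return
                }
            }
            time.Sleep(2 * time.Second)
        }
        panic("timed out waiting for app=netcat")
    }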

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:162: (dbg) Run:  kubectl --context auto-20210813002445-820289 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.27s)
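The DNS subtest is a single in-cluster lookup: exec `nslookup kubernetes.default` inside the netcat deployment and fail on a non-zero exit. A sketch of that one check:

    // Sketch: one in-cluster DNS lookup decides the subtest.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("kubectl", "--context", "auto-20210813002445-820289",
            "exec", "deployment/netcat", "--",
            "nslookup", "kubernetes.default").CombinedOutput()
        if err != nil {
            // nslookup's non-zero exit propagates through kubectl exec.
            panic(fmt.Sprintf("DNS lookup failed: %v\n%s", err, out))
        }
        fmt.Printf("%s", out)
    }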

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:181: (dbg) Run:  kubectl --context auto-20210813002445-820289 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:231: (dbg) Run:  kubectl --context auto-20210813002445-820289 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.27s)
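Localhost and HairPin differ only in the netcat target: `localhost 8080` proves the pod can reach its own port over loopback, while `netcat 8080` goes out through the pod's own Service name and back in, which only succeeds when hairpin NAT works. Both reduce to the same probe, sketched here:

    // Sketch: the shared netcat probe behind Localhost and HairPin.
    package main

    import (
        "fmt"
        "os/exec"
    )

    // ncCheck runs a zero-I/O netcat connect (-z) against target:8080 from
    // inside the netcat deployment, with a 5s timeout (-w 5).
    func ncCheck(ctx, target string) error {
        return exec.Command("kubectl", "--context", ctx, "exec",
            "deployment/netcat", "--", "/bin/sh", "-c",
            fmt.Sprintf("nc -w 5 -i 5 -z %s 8080", target)).Run()
    }

    func main() {
        ctx := "auto-20210813002445-820289"
        // "localhost" = loopback reachability; "netcat" = hairpin via the Service.
        for _, target := range []string{"localhost", "netcat"} {
            if err := ncCheck(ctx, target); err != nil {
                panic(fmt.Sprintf("nc to %s:8080 failed: %v", target, err))
            }
            fmt.Println("reachable:", target+":8080")
        }
    }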

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (87.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-20210813002446-820289 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=kvm2  --container-runtime=crio

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-20210813002446-820289 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m27.111442099s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (87.11s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (7.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-gxbx9" [7a4ba24c-5d59-40e7-b31d-6d89e144b077] Running
start_stop_delete_test.go:247: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 7.025855925s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (7.03s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-gxbx9" [7a4ba24c-5d59-40e7-b31d-6d89e144b077] Running
start_stop_delete_test.go:260: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.153589195s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context embed-certs-20210813002628-820289 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.26s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-20210813002628-820289 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (3.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-20210813002628-820289 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20210813002628-820289 -n embed-certs-20210813002628-820289
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20210813002628-820289 -n embed-certs-20210813002628-820289: exit status 2 (267.037402ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20210813002628-820289 -n embed-certs-20210813002628-820289
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20210813002628-820289 -n embed-certs-20210813002628-820289: exit status 2 (282.911102ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-20210813002628-820289 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20210813002628-820289 -n embed-certs-20210813002628-820289
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20210813002628-820289 -n embed-certs-20210813002628-820289
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium/Start (188.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p cilium-20210813002446-820289 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=kvm2  --container-runtime=crio

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p cilium-20210813002446-820289 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=kvm2  --container-runtime=crio: (3m8.918690326s)
--- PASS: TestNetworkPlugins/group/cilium/Start (188.92s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-bwkqm" [f583a3d2-a542-442d-99e7-7a5ee89838cf] Running
start_stop_delete_test.go:247: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.020073701s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.03s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-bwkqm" [f583a3d2-a542-442d-99e7-7a5ee89838cf] Running
start_stop_delete_test.go:260: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011727895s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context default-k8s-different-port-20210813002719-820289 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.14s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-different-port-20210813002719-820289 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-different-port/serial/Pause (3.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-different-port-20210813002719-820289 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-different-port-20210813002719-820289 --alsologtostderr -v=1: (1.098767924s)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20210813002719-820289 -n default-k8s-different-port-20210813002719-820289
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20210813002719-820289 -n default-k8s-different-port-20210813002719-820289: exit status 2 (287.985783ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20210813002719-820289 -n default-k8s-different-port-20210813002719-820289
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20210813002719-820289 -n default-k8s-different-port-20210813002719-820289: exit status 2 (261.873858ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-different-port-20210813002719-820289 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20210813002719-820289 -n default-k8s-different-port-20210813002719-820289
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20210813002719-820289 -n default-k8s-different-port-20210813002719-820289
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Pause (3.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:343: "kindnet-7ql98" [463ca504-3bb6-4c5c-9bd6-723323c6c368] Running
net_test.go:106: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.026350719s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-20210813002446-820289 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (17.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context kindnet-20210813002446-820289 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-rkv6m" [25720cd6-d3d7-4b85-87d1-ccd62f86ce37] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-66fbc655d5-rkv6m" [25720cd6-d3d7-4b85-87d1-ccd62f86ce37] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 17.012299382s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (17.60s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:162: (dbg) Run:  kubectl --context kindnet-20210813002446-820289 exec deployment/netcat -- nslookup kubernetes.default
E0813 00:36:41.536158  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812235933-820289/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:181: (dbg) Run:  kubectl --context kindnet-20210813002446-820289 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:231: (dbg) Run:  kubectl --context kindnet-20210813002446-820289 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (100.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-20210813002446-820289 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E0813 00:37:07.560299  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210813002446-820289/client.crt: no such file or directory
E0813 00:37:35.243053  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210813002446-820289/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-20210813002446-820289 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m40.380560642s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (100.38s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-fz7sn" [fbc664a1-144f-4171-a626-88f611c13115] Running
start_stop_delete_test.go:247: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.034376829s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.04s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-fz7sn" [fbc664a1-144f-4171-a626-88f611c13115] Running
start_stop_delete_test.go:260: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.019817673s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context no-preload-20210813002620-820289 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-20210813002620-820289 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-20210813002620-820289 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-20210813002620-820289 --alsologtostderr -v=1: (1.027790602s)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20210813002620-820289 -n no-preload-20210813002620-820289
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20210813002620-820289 -n no-preload-20210813002620-820289: exit status 2 (245.775004ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20210813002620-820289 -n no-preload-20210813002620-820289
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20210813002620-820289 -n no-preload-20210813002620-820289: exit status 2 (246.273402ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-20210813002620-820289 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20210813002620-820289 -n no-preload-20210813002620-820289
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20210813002620-820289 -n no-preload-20210813002620-820289
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (94.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-20210813002446-820289 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=flannel --driver=kvm2  --container-runtime=crio

                                                
                                                
=== CONT  TestNetworkPlugins/group/flannel/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p flannel-20210813002446-820289 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m34.438439759s)
--- PASS: TestNetworkPlugins/group/flannel/Start (94.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-20210813002446-820289 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context enable-default-cni-20210813002446-820289 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-tz9fj" [518f0431-0da0-462d-8d4f-230feb0d4318] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-66fbc655d5-tz9fj" [518f0431-0da0-462d-8d4f-230feb0d4318] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.015053578s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.57s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20210813002446-820289 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:181: (dbg) Run:  kubectl --context enable-default-cni-20210813002446-820289 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:231: (dbg) Run:  kubectl --context enable-default-cni-20210813002446-820289 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (97.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-20210813002446-820289 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=kvm2  --container-runtime=crio

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p bridge-20210813002446-820289 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m37.575539277s)
--- PASS: TestNetworkPlugins/group/bridge/Start (97.58s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:343: "cilium-mxmmp" [86137176-890b-4a0c-aa50-e5a8712635a0] Running
E0813 00:38:50.749004  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235029-820289/client.crt: no such file or directory
net_test.go:106: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.025605212s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.03s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p cilium-20210813002446-820289 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium/NetCatPod (12.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context cilium-20210813002446-820289 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-7f8rw" [ef4b9ddb-45e8-40d6-af28-0817cedd19b3] Pending
helpers_test.go:343: "netcat-66fbc655d5-7f8rw" [ef4b9ddb-45e8-40d6-af28-0817cedd19b3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0813 00:38:54.096987  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210813002719-820289/client.crt: no such file or directory
E0813 00:38:54.103058  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210813002719-820289/client.crt: no such file or directory
E0813 00:38:54.113311  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210813002719-820289/client.crt: no such file or directory
E0813 00:38:54.133631  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210813002719-820289/client.crt: no such file or directory
E0813 00:38:54.173897  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210813002719-820289/client.crt: no such file or directory
E0813 00:38:54.254218  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210813002719-820289/client.crt: no such file or directory
E0813 00:38:54.414968  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210813002719-820289/client.crt: no such file or directory
E0813 00:38:54.735393  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210813002719-820289/client.crt: no such file or directory
E0813 00:38:55.376355  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210813002719-820289/client.crt: no such file or directory
E0813 00:38:56.657446  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210813002719-820289/client.crt: no such file or directory
helpers_test.go:343: "netcat-66fbc655d5-7f8rw" [ef4b9ddb-45e8-40d6-af28-0817cedd19b3] Running
E0813 00:38:59.217815  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210813002719-820289/client.crt: no such file or directory
E0813 00:39:04.338756  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210813002719-820289/client.crt: no such file or directory
net_test.go:145: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 12.013896772s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (12.49s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium/DNS (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:162: (dbg) Run:  kubectl --context cilium-20210813002446-820289 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium/Localhost (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:181: (dbg) Run:  kubectl --context cilium-20210813002446-820289 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium/HairPin (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:231: (dbg) Run:  kubectl --context cilium-20210813002446-820289 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-weave/Start (87.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p custom-weave-20210813002446-820289 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=kvm2  --container-runtime=crio
E0813 00:39:14.579774  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210813002719-820289/client.crt: no such file or directory
E0813 00:39:14.806561  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/no-preload-20210813002620-820289/client.crt: no such file or directory
E0813 00:39:19.927009  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/no-preload-20210813002620-820289/client.crt: no such file or directory
E0813 00:39:30.167178  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/no-preload-20210813002620-820289/client.crt: no such file or directory
E0813 00:39:35.060917  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210813002719-820289/client.crt: no such file or directory
E0813 00:39:39.455302  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210813002445-820289/client.crt: no such file or directory
E0813 00:39:39.460565  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210813002445-820289/client.crt: no such file or directory
E0813 00:39:39.470839  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210813002445-820289/client.crt: no such file or directory
E0813 00:39:39.491935  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210813002445-820289/client.crt: no such file or directory
E0813 00:39:39.532214  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210813002445-820289/client.crt: no such file or directory
E0813 00:39:39.612625  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210813002445-820289/client.crt: no such file or directory
E0813 00:39:39.773614  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210813002445-820289/client.crt: no such file or directory
E0813 00:39:40.094775  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210813002445-820289/client.crt: no such file or directory
E0813 00:39:40.735389  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210813002445-820289/client.crt: no such file or directory
E0813 00:39:42.016545  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210813002445-820289/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/custom-weave/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p custom-weave-20210813002446-820289 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=kvm2  --container-runtime=crio: (1m27.66756902s)
--- PASS: TestNetworkPlugins/group/custom-weave/Start (87.67s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-system" ...
helpers_test.go:343: "kube-flannel-ds-amd64-5sbst" [419fe37b-c14b-495b-bbbf-d0567d54e4f4] Running
E0813 00:39:44.577694  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210813002445-820289/client.crt: no such file or directory
E0813 00:39:44.582857  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210812235933-820289/client.crt: no such file or directory
net_test.go:106: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.021733792s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

TestNetworkPlugins/group/flannel/KubeletFlags (1.22s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-20210813002446-820289 "pgrep -a kubelet"
net_test.go:119: (dbg) Done: out/minikube-linux-amd64 ssh -p flannel-20210813002446-820289 "pgrep -a kubelet": (1.218028521s)
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (1.22s)

TestNetworkPlugins/group/flannel/NetCatPod (12.65s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context flannel-20210813002446-820289 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-kj7wd" [f0d185e2-a01c-47e4-9f85-da029af91f0d] Pending
helpers_test.go:343: "netcat-66fbc655d5-kj7wd" [f0d185e2-a01c-47e4-9f85-da029af91f0d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0813 00:39:50.647853  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/no-preload-20210813002620-820289/client.crt: no such file or directory
E0813 00:39:51.230180  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210813002445-820289/client.crt: no such file or directory
helpers_test.go:343: "netcat-66fbc655d5-kj7wd" [f0d185e2-a01c-47e4-9f85-da029af91f0d] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.023656925s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.65s)

TestNetworkPlugins/group/flannel/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:162: (dbg) Run:  kubectl --context flannel-20210813002446-820289 exec deployment/netcat -- nslookup kubernetes.default
E0813 00:40:01.470641  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210813002445-820289/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.28s)

TestNetworkPlugins/group/flannel/Localhost (0.27s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:181: (dbg) Run:  kubectl --context flannel-20210813002446-820289 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.27s)

TestNetworkPlugins/group/flannel/HairPin (0.25s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:231: (dbg) Run:  kubectl --context flannel-20210813002446-820289 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.25s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-20210813002446-820289 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

TestNetworkPlugins/group/bridge/NetCatPod (11.54s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context bridge-20210813002446-820289 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-mqqvc" [aab24804-a51d-42d0-ad4c-cc6d1e8094ae] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0813 00:40:16.021388  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210813002719-820289/client.crt: no such file or directory
helpers_test.go:343: "netcat-66fbc655d5-mqqvc" [aab24804-a51d-42d0-ad4c-cc6d1e8094ae] Running
E0813 00:40:21.951088  820289 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-crio-12230-816896-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210813002445-820289/client.crt: no such file or directory
net_test.go:145: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.022200759s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.54s)

TestNetworkPlugins/group/bridge/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:162: (dbg) Run:  kubectl --context bridge-20210813002446-820289 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.28s)

TestNetworkPlugins/group/bridge/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:181: (dbg) Run:  kubectl --context bridge-20210813002446-820289 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.23s)

TestNetworkPlugins/group/bridge/HairPin (0.25s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:231: (dbg) Run:  kubectl --context bridge-20210813002446-820289 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.25s)

TestNetworkPlugins/group/custom-weave/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/custom-weave/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-weave-20210813002446-820289 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-weave/KubeletFlags (0.21s)

TestNetworkPlugins/group/custom-weave/NetCatPod (10.50s)

=== RUN   TestNetworkPlugins/group/custom-weave/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context custom-weave-20210813002446-820289 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-tj5t8" [5f71c8c3-0e98-4f9c-9cc1-1b2156d9df99] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-66fbc655d5-tj5t8" [5f71c8c3-0e98-4f9c-9cc1-1b2156d9df99] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: app=netcat healthy within 10.011399636s
--- PASS: TestNetworkPlugins/group/custom-weave/NetCatPod (10.50s)

Test skip (28/263)

TestDownloadOnly/v1.14.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.14.0/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.14.0/cached-images (0.00s)

TestDownloadOnly/v1.14.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.14.0/binaries
aaa_download_only_test.go:138: Preload exists; binaries are present within it.
--- SKIP: TestDownloadOnly/v1.14.0/binaries (0.00s)

TestDownloadOnly/v1.14.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.14.0/kubectl
aaa_download_only_test.go:154: Test is only for darwin and windows
--- SKIP: TestDownloadOnly/v1.14.0/kubectl (0.00s)

TestDownloadOnly/v1.21.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.21.3/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.21.3/cached-images (0.00s)

TestDownloadOnly/v1.21.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.21.3/binaries
aaa_download_only_test.go:138: Preload exists; binaries are present within it.
--- SKIP: TestDownloadOnly/v1.21.3/binaries (0.00s)

TestDownloadOnly/v1.21.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.21.3/kubectl
aaa_download_only_test.go:154: Test is only for darwin and windows
--- SKIP: TestDownloadOnly/v1.21.3/kubectl (0.00s)

TestDownloadOnly/v1.22.0-rc.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.22.0-rc.0/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.22.0-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.22.0-rc.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.22.0-rc.0/binaries
aaa_download_only_test.go:138: Preload exists; binaries are present within it.
--- SKIP: TestDownloadOnly/v1.22.0-rc.0/binaries (0.00s)

TestDownloadOnly/v1.22.0-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.22.0-rc.0/kubectl
aaa_download_only_test.go:154: Test is only for darwin and windows
--- SKIP: TestDownloadOnly/v1.22.0-rc.0/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:212: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:35: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:115: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:188: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:467: docker-env is only validated with the docker container runtime; currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:527: podman-env is only validated with the docker container runtime; currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:96: DNS forwarding is currently supported on darwin only; skipping the DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:96: DNS forwarding is currently supported on darwin only; skipping the DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:96: DNS forwarding is currently supported on darwin only; skipping the DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:39: Test only runs with the none driver.
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:43: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:43: skaffold requires docker-env; currently testing the crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:286: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestStartStop/group/disable-driver-mounts (0.38s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:91: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-20210813002719-820289" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-20210813002719-820289
--- SKIP: TestStartStop/group/disable-driver-mounts (0.38s)

TestNetworkPlugins/group/kubenet (0.39s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:88: Skipping the test as the crio container runtime requires CNI
helpers_test.go:176: Cleaning up "kubenet-20210813002445-820289" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-20210813002445-820289
--- SKIP: TestNetworkPlugins/group/kubenet (0.39s)